===== proposal/README.md =====

# Proposing Changes to Go
## Introduction
The Go project's development process is design-driven.
Significant changes to the language, libraries, or tools
(which includes API changes in the main repo and all golang.org/x repos,
as well as command-line changes to the `go` command)
must be first discussed, and sometimes formally documented,
before they can be implemented.
This document describes the process for proposing, documenting, and
implementing changes to the Go project.
To learn more about Go's origins and development process, see the talks
[How Go Was Made](https://talks.golang.org/2015/how-go-was-made.slide),
[The Evolution of Go](https://talks.golang.org/2015/gophercon-goevolution.slide),
and [Go, Open Source, Community](https://blog.golang.org/open-source)
from GopherCon 2015.
## The Proposal Process
The proposal process is the process for reviewing a proposal and reaching
a decision about whether to accept or decline the proposal.
1. The proposal author [creates a brief issue](https://golang.org/issue/new) describing the proposal.\
Note: There is no need for a design document at this point.\
Note: A non-proposal issue can be turned into a proposal by simply adding the proposal label.\
Note: [Language changes](#language-changes) should follow a separate [template](go2-language-changes.md).
2. A discussion on the issue tracker aims to triage the proposal into one of three outcomes:
   - accept proposal, or
- decline proposal, or
- ask for a design doc.
If the proposal is accepted or declined, the process is done.
Otherwise the discussion is expected to identify concerns that
should be addressed in a more detailed design.
3. The proposal author writes a [design doc](#design-documents) to work out details of the proposed
design and address the concerns raised in the initial discussion.
4. Once comments and revisions on the design doc wind down, there is a final
discussion on the issue, to reach one of two outcomes:
   - accept proposal, or
   - decline proposal.
After the proposal is accepted or declined (whether after step 2 or step 4),
implementation work proceeds in the same way as any other contribution.
## Detail
### Goals
- Make sure that proposals get a proper, fair, timely, recorded evaluation with
a clear answer.
- Make past proposals easy to find, to avoid duplicated effort.
- If a design doc is needed, make sure contributors know how to write a good one.
### Definitions
- A **proposal** is a suggestion filed as a GitHub issue, identified by having
the Proposal label.
- A **design doc** is the expanded form of a proposal, written when the
proposal needs more careful explanation and consideration.
### Scope
The proposal process should be used for any notable change or addition to the
language, libraries and tools.
“Notable” includes (but is not limited to):
- API changes in the main repo and all golang.org/x repos.
- Command-line changes to the `go` command.
- Any visible behavior changes that need a [GODEBUG setting](https://go.dev/doc/godebug) for compatibility.
- Any other visible behavior changes in existing functionality.
- Adoption or use of new protocols, protocol versions, cryptographic algorithms, and the like,
even in an implementation.
Such changes are externally visible and require discussion and probably a GODEBUG setting.
Since proposals begin (and will often end) with the filing of an issue, even
small changes can go through the proposal process if appropriate.
Deciding what is appropriate is a matter of judgment we will refine through
experience.
If in doubt, file a proposal.
There is a short list of changes that are typically not in scope for the proposal process:
- Making API changes in internal packages, since those APIs are not publicly visible.
- Making API or command-line changes in golang.org/x/build, since that is code to run the Go project, not for users to import and depend on.
- Adding new system call numbers or direct system call wrappers (`//sys` lines) in golang.org/x/sys.
- Adding new C-equivalent data structures to support those system calls.
Again, if in doubt, file a proposal.
### Compatibility
Programs written for Go version 1.x must continue to compile and work with
future versions of Go 1.
The [Go 1 compatibility document](https://golang.org/doc/go1compat) describes
the promise we have made to Go users for the future of Go 1.x.
Any proposed change must not break this promise.
### Language changes
In 2018 we started a Go 2 process during which we may change the
language, as described on [the Go
blog](https://blog.golang.org/go2-here-we-come).
Language changes should follow the proposal process described here.
As explained in the blog entry, language change proposals should
- address an important issue for many people,
- have minimal impact on everybody else, and
- come with a clear and well-understood solution.
Proposals should follow the [Go 2 template](go2-language-changes.md).
See the [Go 2 review minutes](https://golang.org/issue/33892)
and the [release notes](https://golang.org/doc/devel/release.html) for
examples of recent language changes.
### Design Documents
As noted above, some (but not all) proposals need to be elaborated in a design document.
- The design doc should be checked in to [the proposal repository](https://github.com/golang/proposal/) as `design/NNNN-shortname.md`,
where `NNNN` is the GitHub issue number and `shortname` is a short name
(a few dash-separated words at most).
Clone this repository with `git clone https://go.googlesource.com/proposal`
and follow the usual [Gerrit workflow for Go](https://golang.org/doc/contribute.html#Code_review).
- The design doc should follow [the template](design/TEMPLATE.md).
- The design doc should address any specific concerns raised during the initial discussion.
- It is expected that the design doc may go through multiple checked-in revisions.
New design doc authors may be paired with a design doc "shepherd" to help work on the doc.
- For ease of review with Gerrit, design documents should be wrapped around the
80 column mark.
[Each sentence should start on a new line](http://rhodesmill.org/brandon/2012/one-sentence-per-line/)
so that comments can be made accurately and the diff kept shorter.
- In Emacs, loading `fill.el` from this directory will make `fill-paragraph` format text this way.
- Comments on Gerrit CLs should be restricted to grammar, spelling,
or procedural errors related to the preparation of the proposal itself.
All other comments should be addressed to the related GitHub issue.
### Quick Start for Experienced Committers
Experienced committers who are certain that a design doc will be
required for a particular proposal
can skip steps 1 and 2 and include the design doc with the initial issue.
In the worst case, skipping these steps only leads to an unnecessary design doc.
### Proposal Review
A group of Go team members holds “proposal review meetings”
approximately weekly to review pending proposals.
The principal goal of the review meeting is to make sure that proposals
are receiving attention from the right people,
by cc'ing relevant developers, raising important questions,
pinging lapsed discussions, and generally trying to guide discussion
toward agreement about the outcome.
The discussion itself is expected to happen on the issue tracker,
so that anyone can take part.
The proposal review meetings also identify issues where
consensus has been reached and the process can be
advanced to the next step (by marking the proposal accepted
or declined or by asking for a design doc).
Minutes are posted to [golang.org/s/proposal-minutes](https://golang.org/s/proposal-minutes)
after the conclusion of the weekly meeting, so that anyone
interested in which proposals are under active consideration
can follow that issue.
Proposal issues are tracked in the
[Proposals project](https://github.com/orgs/golang/projects/17) on the Go issue tracker.
The current state of the proposal is captured by the columns in that project,
as described below.
The proposal review group can, at their discretion, make exceptions for
proposals that need not go through all the stages, fast-tracking them
to Likely Accept/Likely Decline or even Accept/Decline, such as for
proposals that do not merit the full review or that need to be considered
quickly due to pending releases.
#### Incoming
New proposals are added to the Incoming column.
The weekly proposal review meetings aim to look at all the issues
in the Active, Likely Accept, and Likely Decline columns.
If time is left over, then proposals from Incoming are selected
to be moved to Active.
Holding proposals in Incoming until attention can be devoted to them
(at which point they move to Active, and then onward) ensures that
progress is made moving active proposals out to Accepted or Declined,
so we avoid [receive livelock](http://www.news.cs.nyu.edu/~jinyang/sp09/readings/mogul96usenix.pdf),
in which accepting new work prevents finishing old work.
#### Active
Issues in the Active column are reviewed at each weekly proposal meeting
to watch for emerging consensus in the discussions.
The proposal review group may also comment, make suggestions,
ask clarifying questions, and try to restate the proposals to make sure
everyone agrees about what exactly is being discussed.
#### Likely Accept
If an issue discussion seems to have reached
a consensus to accept the proposal, the proposal review group
moves the issue to the Likely Accept column
and posts a comment noting that change.
This marks the final period for comments that might
change the recognition of consensus.
#### Likely Decline
If a proposal discussion seems to have reached
a consensus to decline the proposal, the proposal review group
moves the issue to the Likely Decline column.
An issue may also be moved to Likely Decline if the
proposal review group identifies that no consensus
is likely to be reached and that the default of not accepting
the proposal is appropriate.
Just as for Likely Accept, the group posts a comment noting the change,
and this marks the final period for comments that might
change the recognition of consensus.
#### Accepted
A week after a proposal moves to Likely Accept, absent a change in consensus,
the proposal review group moves the proposal to the Accepted column.
If significant discussion happens during that week,
the proposal review group may leave the proposal
in Likely Accept for another week or even move the proposal back to Active.
Once a proposal is marked Accepted, the Proposal-Accepted label is applied,
it is moved out of the Proposal milestone into a work milestone,
and the issue is repurposed to track the work of implementing the proposal.
The default work milestone is Backlog, indicating
that the work applies to the Go release itself but is not slated for a particular release.
Another common next milestone is Unreleased, used for work that is not part
of any Go release (for example, work in parts of golang.org/x that are not vendored
into the standard releases).
#### Declined
A week after a proposal moves to Likely Decline, absent a change in consensus,
the proposal review group moves the proposal to the Declined column.
If significant discussion happens during that week,
the proposal review group may leave the proposal
in Likely Decline for another week or even move the proposal back to Active.
Once a proposal is marked Declined, it is closed.
#### Declined as Duplicate
If a proposal duplicates a previously decided proposal,
the proposal review group may decline the proposal as a duplicate
without progressing through the Active or Likely Decline stages.
Generally speaking, our approach to reconsidering previously decided proposals
follows John Ousterhout's advice in his post
“[Open Decision-Making](https://web.stanford.edu/~ouster/cgi-bin/decisions.php),”
in particular the “Reconsideration” section.
#### Declined as Infeasible
If a proposal directly contradicts the core design of the language or of a package,
or if a proposal is impossible to implement efficiently or at all,
the proposal review group may decline the proposal as infeasible
without progressing through the Active or Likely Decline stages.
If it seems like there is still general interest from others,
or that discussion may lead to a feasible proposal,
the proposal may also be kept open and the discussion continued.
#### Declined as Retracted
If a proposal is closed or retracted in a comment by the original author,
the proposal review group may decline the proposal as retracted
without progressing through the Active or Likely Decline stages.
If it seems like there is still general interest from others, the proposal
may also be kept open and the discussion continued.
#### Declined as Obsolete
If a proposal is obsoleted by changes to Go that have been made
since the proposal was filed, the proposal review group may decline
the proposal as obsolete without progressing through the Active or
Likely Decline stages.
If it seems like there is still general interest from others,
or that discussion may lead to a different, non-obsolete proposal,
the proposal may also be kept open and the discussion continued.
#### Hold
If discussion of a proposal requires design revisions or additional information
that will not be available for a couple weeks or more, the proposal review group
moves the proposal to the Hold column with a note of what it is waiting on.
Once that thing is ready, anyone who can edit the issue tracker can move the
proposal back to the Active column for consideration at the next proposal review meeting.
### Consensus and Disagreement
The goal of the proposal process is to reach general consensus about the outcome
in a timely manner.
If proposal review cannot identify a general consensus
in the discussion of the issue on the issue tracker,
the usual result is that the proposal is declined.
It can happen that proposal review may not identify a
general consensus and yet it is clear that the proposal
should not be outright declined.
As one example, there may be a consensus that some solution
to a problem is important, but no consensus on which of
two competing solutions should be adopted.
If the proposal review group cannot identify a consensus
nor a next step for the proposal, the decision about the path forward
passes to the Go architects (currently [gri@](mailto:gri@golang.org),
[iant@](mailto:iant@golang.org), and [rsc@](mailto:rsc@golang.org)),
who review the discussion and aim to reach a consensus among themselves.
If so, they document the decision and its rationale on the issue.
If consensus among the architects cannot be reached,
which is even more unusual,
the arbiter (currently [rsc@](mailto:rsc@golang.org))
reviews the discussion and decides the next step,
documenting the decision and its rationale on the issue.
## Help
If you need help with this process, please contact the Go contributors by posting
to the [golang-dev mailing list](https://groups.google.com/group/golang-dev).
(Note that the list is moderated, and that first-time posters should expect a
delay while their message is held for moderation.)
To learn about contributing to Go in general, see the
[contribution guidelines](https://golang.org/doc/contribute.html).
===== proposal/fill.el =====

;; Copyright 2017 The Go Authors. All rights reserved.
;; Use of this source code is governed by a BSD-style
;; license that can be found in the LICENSE file.
;; This makes fill-paragraph (M-q) add line breaks at sentence
;; boundaries in addition to normal wrapping. This is the style for Go
;; proposals.
;;
;; Loading this script automatically enables this for markdown-mode
;; buffers in the go-design/proposal directory. It can also be
;; manually enabled with M-x enable-fill-split-sentences.
;;
;; This is sensitive to the setting of `sentence-end-double-space`,
;; which defaults to t. If `sentence-end-double-space` is t, but a
;; paragraph uses only a single space between sentences, this will not
;; insert line breaks where expected.
(defun fill-split-sentences (&optional justify)
  "Fill paragraph at point, breaking lines at sentence boundaries."
  (interactive)
  (save-excursion
    ;; Do a trial fill and get the fill prefix for this paragraph.
    (let ((prefix (or (fill-paragraph) ""))
          (end (progn (fill-forward-paragraph 1) (point)))
          (beg (progn (fill-forward-paragraph -1) (point))))
      (save-restriction
        (narrow-to-region (line-beginning-position) end)
        ;; Unfill the paragraph.
        (let ((fill-column (point-max)))
          (fill-region beg end))
        ;; Fill each sentence.
        (goto-char (point-min))
        (while (not (eobp))
          (if (bobp)
              ;; Skip over initial prefix.
              (goto-char beg)
            ;; Clean up space between sentences.
            (skip-chars-forward " \t")
            (delete-horizontal-space 'backward-only)
            (insert "\n" prefix))
          (let ((sbeg (point))
                (fill-prefix prefix))
            (forward-sentence)
            (fill-region-as-paragraph sbeg (point)))))
      prefix)))
(defun enable-fill-split-sentences ()
  "Make fill break lines at sentence boundaries in this buffer."
  (interactive)
  (setq-local fill-paragraph-function #'fill-split-sentences))

(defun proposal-enable-fill-split ()
  (when (string-match "go-proposal/design/" (buffer-file-name))
    (enable-fill-split-sentences)))

;; Enable sentence splitting in new proposal buffers.
(add-hook 'markdown-mode-hook #'proposal-enable-fill-split)

;; Enable sentence splitting in this buffer, in case the user loaded
;; fill.el when already in a buffer.
(when (eq major-mode 'markdown-mode)
  (proposal-enable-fill-split))
===== proposal/go2-language-changes.md =====

# Go 2 language change template
Authors: Ian Lance Taylor, Robert Griesemer, Brad Fitzpatrick
Last updated: January, 2020
## Introduction
We get more language change proposals than we have time to review
thoroughly.
Changing the language has serious consequences that could affect the
entire Go ecosystem, so many factors come into consideration.
If you just have an idea for a language change, and would like help
turning it into a complete proposal, we ask that you not open an
issue, but instead discuss the idea on a forum such as [the
golang-nuts mailing
list](https://groups.google.com/forum/#!forum/golang-nuts).
Before proceeding with a full proposal, please review the requirements
listed in the Go blog article [Go 2, here we
come!](https://blog.golang.org/go2-here-we-come): Each language change
proposal must:
1. address an important issue for many people,
1. have minimal impact on everybody else, and
1. come with a clear and well-understood solution.
If you believe that your proposal meets these criteria and wish to
proceed, then in order to help with review we ask that you place your
proposal in context by answering the questions below as best you can.
You do not have to answer every question but please do your best.
## Template
- Would you consider yourself a novice, intermediate, or experienced Go programmer?
- What other languages do you have experience with?
- Would this change make Go easier or harder to learn, and why?
- Has this idea, or one like it, been proposed before?
- If so, how does this proposal differ?
- Who does this proposal help, and why?
- What is the proposed change?
- Please describe as precisely as possible the change to the language.
- What would change in the [language spec](https://golang.org/ref/spec)?
- Please also describe the change informally, as in a class teaching Go.
- Is this change backward compatible?
- Breaking the [Go 1 compatibility guarantee](https://golang.org/doc/go1compat) is a large cost and requires a large benefit.
- Show example code before and after the change.
- What is the cost of this proposal? (Every language change has a cost).
- How many tools (such as vet, gopls, gofmt, goimports, etc.) would be affected?
- What is the compile time cost?
- What is the run time cost?
- Can you describe a possible implementation?
- Do you have a prototype? (This is not required.)
- How would the language spec change?
- Orthogonality: how does this change interact or overlap with existing features?
- Is the goal of this change a performance improvement?
- If so, what quantifiable improvement should we expect?
- How would we measure it?
- Does this affect error handling?
- If so, how does this differ from [previous error handling proposals](https://github.com/golang/go/issues?utf8=%E2%9C%93&q=label%3Aerror-handling)?
- Is this about generics?
- If so, how does this relate to the [accepted design](https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md)
and [other generics proposals](https://github.com/golang/go/issues?utf8=%E2%9C%93&q=label%3Agenerics)?
## What to avoid
If you are unable to answer many of these questions, perhaps your
change has some of these characteristics:
- Your proposal simply changes syntax from X to Y because you prefer Y.
- You believe that Go needs feature X because other languages have it.
- You have an idea for a new feature but it is very difficult to implement.
- Your proposal states a problem rather than a solution.
Such proposals are likely to be rejected quickly.
## Thanks
We believe that following this template will save reviewer time when
considering language change proposals.
===== proposal/LICENSE =====

Copyright 2014 The Go Authors.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google LLC nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
===== proposal/codereview.cfg =====

issuerepo: golang/go
===== proposal/design/43651-type-parameters.md =====

# Type Parameters Proposal
Ian Lance Taylor\
Robert Griesemer\
August 20, 2021
## Status
This is the design for adding generic programming using type
parameters to the Go language.
This design has been [proposed and
accepted](https://golang.org/issue/43651) as a future language change.
We currently expect that this change will be available in the Go 1.18
release in early 2022.
## Abstract
We suggest extending the Go language to add optional type parameters
to type and function declarations.
Type parameters are constrained by interface types.
Interface types, when used as type constraints, support embedding
additional elements that may be used to limit the set of types that
satisfy the constraint.
Parameterized types and functions may use operators with type
parameters, but only when permitted by all types that satisfy the
parameter's constraint.
Type inference via a unification algorithm permits omitting type
arguments from function calls in many cases.
The design is fully backward compatible with Go 1.
## How to read this proposal
This document is long.
Here is some guidance on how to read it.
* We start with a high level overview, describing the concepts very
briefly.
* We then explain the full design starting from scratch, introducing
the details as we need them, with simple examples.
* After the design is completely described, we discuss implementation,
some issues with the design, and a comparison with other approaches
to generics.
* We then present several complete examples of how this design would
be used in practice.
* Following the examples some minor details are discussed in an
appendix.
## Very high level overview
This section explains the changes suggested by the design very
briefly.
This section is intended for people who are already familiar with how
generics would work in a language like Go.
These concepts will be explained in detail in the following sections.
* Functions can have an additional type parameter list that uses
square brackets but otherwise looks like an ordinary parameter list:
`func F[T any](p T) { ... }`.
* These type parameters can be used by the regular parameters and in
the function body.
* Types can also have a type parameter list: `type M[T any] []T`.
* Each type parameter has a type constraint, just as each ordinary
parameter has a type: `func F[T Constraint](p T) { ... }`.
* Type constraints are interface types.
* The new predeclared name `any` is a type constraint that permits any
type.
* Interface types used as type constraints can embed additional
elements to restrict the set of type arguments that satisfy the
contraint:
* an arbitrary type `T` restricts to that type
* an approximation element `~T` restricts to all types whose
underlying type is `T`
* a union element `T1 | T2 | ...` restricts to any of the listed
elements
* Generic functions may only use operations supported by all the types
permitted by the constraint.
* Using a generic function or type requires passing type arguments.
* Type inference permits omitting the type arguments of a function
call in common cases.
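The bullet points above can be seen working together in a small runnable sketch. This is our own illustrative example, not code taken from the design; the names `Number`, `Sum`, and `kilometers` are invented for illustration.

```go
package main

import "fmt"

// Number is an interface used as a type constraint.
// The union element "~int | ~float64" permits any type whose
// underlying type is int or float64 ("~" is an approximation element).
type Number interface {
	~int | ~float64
}

// Sum is a generic function: the type parameter list [T Number] uses
// square brackets, and T is used both in the regular parameter list
// and in the function body. The "+" operator is permitted because it
// is supported by every type satisfying Number.
func Sum[T Number](s []T) T {
	var total T
	for _, v := range s {
		total += v
	}
	return total
}

// kilometers has underlying type float64, so ~float64 admits it.
type kilometers float64

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))         // type inference: T = int
	fmt.Println(Sum([]kilometers{1.5, 2.5})) // T = kilometers
}
```

Note that neither call passes an explicit type argument: type inference deduces `T` from the non-type arguments, as the last bullet describes.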
In the following sections we work through each of these language
changes in great detail.
You may prefer to skip ahead to the [examples](#Examples) to see what
generic code written to this design will look like in practice.
## Background
There have been many [requests to add additional support for generic
programming](https://github.com/golang/go/wiki/ExperienceReports#generics)
in Go.
There has been extensive discussion on
[the issue tracker](https://golang.org/issue/15292) and on
[a living document](https://docs.google.com/document/d/1vrAy9gMpMoS3uaVphB32uVXX4pi-HnNjkMEgyAHX4N4/view).
This design suggests extending the Go language to add a form of
parametric polymorphism, where the type parameters are bounded not by
a declared subtyping relationship (as in some object oriented
languages) but by explicitly defined structural constraints.
This version of the design has many similarities to a design draft
presented on July 31, 2019, but contracts have been removed and
replaced by interface types, and the syntax has changed.
There have been several proposals for adding type parameters, which
can be found through the links above.
Many of the ideas presented here have appeared before.
The main new features described here are the syntax and the careful
examination of interface types as constraints.
This design does not support template metaprogramming or any other
form of compile time programming.
As the term _generic_ is widely used in the Go community, we will
use it below as a shorthand to mean a function or type that takes type
parameters.
Don't confuse the term generic as used in this design with the same
term in other languages like C++, C#, Java, or Rust; they have
similarities but are not the same.
## Design
We will describe the complete design in stages based on simple
examples.
### Type parameters
Generic code is written using abstract data types that we call _type
parameters_.
When running the generic code, the type parameters are replaced by
_type arguments_.
Here is a function that prints out each element of a slice, where the
element type of the slice, here called `T`, is unknown.
This is a trivial example of the kind of function we want to permit in
order to support generic programming.
(Later we'll also discuss [generic types](#Generic-types)).
```
// Print prints the elements of a slice.
// It should be possible to call this with any slice value.
func Print(s []T) { // Just an example, not the suggested syntax.
for _, v := range s {
fmt.Println(v)
}
}
```
With this approach, the first decision to make is: how should the type
parameter `T` be declared?
In a language like Go, we expect every identifier to be declared in
some way.
Here we make a design decision: type parameters are similar to
ordinary non-type function parameters, and as such should be listed
along with other parameters.
However, type parameters are not the same as non-type parameters, so
although they appear in the list of parameters we want to distinguish
them.
That leads to our next design decision: we define an additional
optional parameter list describing type parameters.
This type parameter list appears before the regular parameters.
To distinguish the type parameter list from the regular parameter
list, the type parameter list uses square brackets rather than
parentheses.
Just as regular parameters have types, type parameters have
meta-types, also known as constraints.
We will discuss the details of constraints later; for now, we will
just note that `any` is a valid constraint, meaning that any type is
permitted.
```
// Print prints the elements of any slice.
// Print has a type parameter T and has a single (non-type)
// parameter s which is a slice of that type parameter.
func Print[T any](s []T) {
// same as above
}
```
This says that within the function `Print` the identifier `T` is a
type parameter, a type that is currently unknown but that will be
known when the function is called.
The `any` means that `T` can be any type at all.
As seen above, the type parameter may be used as a type when
describing the types of the ordinary non-type parameters.
It may also be used as a type within the body of the function.
Unlike regular parameter lists, in type parameter lists names are
required for the type parameters.
This avoids a syntactic ambiguity, and, as it happens, there is no
reason to ever omit the type parameter names.
Since `Print` has a type parameter, any call of `Print` must provide a
type argument.
Later we will see how this type argument can usually be deduced from
the non-type argument, by using [type inference](#Type-inference).
For now, we'll pass the type argument explicitly.
Type arguments are passed much like type parameters are declared: as a
separate list of arguments.
As with the type parameter list, the list of type arguments uses
square brackets.
```
// Call Print with a []int.
// Print has a type parameter T, and we want to pass a []int,
// so we pass a type argument of int by writing Print[int].
// The function Print[int] expects a []int as an argument.
Print[int]([]int{1, 2, 3})
// This will print:
// 1
// 2
// 3
```
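Putting the declaration and the call together, the snippets above assemble into a complete program; only the package clause, the `fmt` import, and the `main` wrapper are added here.

```go
package main

import "fmt"

// Print prints the elements of any slice.
// Print has a type parameter T and a single (non-type)
// parameter s, which is a slice of that type parameter.
func Print[T any](s []T) {
	for _, v := range s {
		fmt.Println(v)
	}
}

func main() {
	// Pass the type argument explicitly by writing Print[int].
	Print[int]([]int{1, 2, 3}) // prints 1, 2, and 3 on separate lines
}
```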
### Constraints
Let's make our example slightly more complicated.
Let's turn it into a function that converts a slice of any type into a
`[]string` by calling a `String` method on each element.
```
// This function is INVALID.
func Stringify[T any](s []T) (ret []string) {
for _, v := range s {
ret = append(ret, v.String()) // INVALID
}
return ret
}
```
This might seem OK at first glance, but in this example `v` has type
`T`, and `T` can be any type.
This means that `T` need not have a `String` method.
So the call to `v.String()` is invalid.
Naturally, the same issue arises in other languages that support
generic programming.
In C++, for example, a generic function (in C++ terms, a function
template) can call any method on a value of generic type.
That is, in the C++ approach, calling `v.String()` is fine.
If the function is called with a type argument that does not have a
`String` method, the error is reported when compiling the call to
`v.String` with that type argument.
These errors can be lengthy, as there may be several layers of generic
function calls before the error occurs, all of which must be reported
to understand what went wrong.
The C++ approach would be a poor choice for Go.
One reason is the style of the language.
In Go we don't refer to names, such as, in this case, `String`, and
hope that they exist.
Go resolves all names to their declarations when they are seen.
Another reason is that Go is designed to support programming at
scale.
We must consider the case in which the generic function definition
(`Stringify`, above) and the call to the generic function (not shown,
but perhaps in some other package) are far apart.
In general, all generic code expects the type arguments to meet
certain requirements.
We refer to these requirements as _constraints_ (other languages have
similar ideas known as type bounds or trait bounds or concepts).
In this case, the constraint is pretty obvious: the type has to have a
`String() string` method.
In other cases it may be much less obvious.
We don't want to derive the constraints from whatever `Stringify`
happens to do (in this case, call the `String` method).
If we did, a minor change to `Stringify` might change the
constraints.
That would mean that a minor change could cause code far away, that
calls the function, to unexpectedly break.
It's fine for `Stringify` to deliberately change its constraints, and
force callers to change.
What we want to avoid is `Stringify` changing its constraints
accidentally.
This means that the constraints must set limits on both the type
arguments passed by the caller and the code in the generic function.
The caller may only pass type arguments that satisfy the constraints.
The generic function may only use those values in ways that are
permitted by the constraints.
This is an important rule that we believe should apply to any attempt
to define generic programming in Go: generic code can only use
operations that its type arguments are known to implement.
### Operations permitted for any type
Before we discuss constraints further, let's briefly note what happens
when the constraint is `any`.
If a generic function uses the `any` constraint for a type parameter,
as is the case for the `Print` method above, then any type argument is
permitted for that parameter.
The only operations that the generic function can use with values of
that type parameter are those operations that are permitted for values
of any type.
In the example above, the `Print` function declares a variable `v`
whose type is the type parameter `T`, and it passes that variable to a
function.
The operations permitted for any type are:
* declare variables of those types
* assign other values of the same type to those variables
* pass those variables to functions or return them from functions
* take the address of those variables
* convert or assign values of those types to the type `interface{}`
* convert a value of type `T` to type `T` (permitted but useless)
* use a type assertion to convert an interface value to the type
* use the type as a case in a type switch
* define and use composite types that use those types, such as a slice
of that type
* pass the type to some predeclared functions such as `new`
It's possible that future language changes will add other such
operations, though none are currently anticipated.
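Several of the operations in the list above can be seen in one place in the following sketch. The function name `Box` is hypothetical, not part of the design; the example assumes a compiler implementing this design (or Go 1.18+, which adopted it).

```go
package main

import "fmt"

// Box exercises operations permitted for an unconstrained type
// parameter T: declaring a variable of type T, assigning to it,
// taking its address, assigning it to an interface{}, and using
// T in a composite type (a slice).
func Box[T any](v T) []interface{} {
	var x T = v            // declare a variable of type T
	p := &x                // take the address of that variable
	var i interface{} = *p // assign a value of type T to interface{}
	s := []T{x, v}         // composite type built from T
	return []interface{}{i, len(s)}
}

func main() {
	fmt.Println(Box(42)) // [42 2]
}
```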
### Defining constraints
Go already has a construct that is close to what we need for a
constraint: an interface type.
An interface type is a set of methods.
The only values that can be assigned to a variable of interface type
are those whose types implement the same methods.
The only operations that can be done with a value of interface type,
other than operations permitted for any type, are to call the
methods.
Calling a generic function with a type argument is similar to
assigning to a variable of interface type: the type argument must
implement the constraints of the type parameter.
Writing a generic function is like using values of interface type: the
generic code can only use the operations permitted by the constraint
(or operations that are permitted for any type).
Therefore, in this design, constraints are simply interface types.
Satisfying a constraint means implementing the interface type.
(Later we'll restate this in order to define constraints for
operations other than method calls, such as [binary operators](#Operators)).
For the `Stringify` example, we need an interface type with a `String`
method that takes no arguments and returns a value of type `string`.
```
// Stringer is a type constraint that requires the type argument to have
// a String method and permits the generic function to call String.
// The String method should return a string representation of the value.
type Stringer interface {
String() string
}
```
(It doesn't matter for this discussion, but this defines the same
interface as the standard library's `fmt.Stringer` type, and real
code would likely simply use `fmt.Stringer`.)
### The `any` constraint
Now that we know that constraints are simply interface types, we can
explain what `any` means as a constraint.
As shown above, the `any` constraint permits any type as a type
argument and only permits the function to use the operations permitted
for any type.
The interface type for that is the empty interface: `interface{}`.
So we could write the `Print` example as
```
// Print prints the elements of any slice.
// Print has a type parameter T and has a single (non-type)
// parameter s which is a slice of that type parameter.
func Print[T interface{}](s []T) {
// same as above
}
```
However, it's tedious to have to write `interface{}` every time you
write a generic function that doesn't impose constraints on its type
parameters.
So in this design we suggest a type constraint `any` that is
equivalent to `interface{}`.
This will be a predeclared name, implicitly declared in the universe
block.
It will not be valid to use `any` as anything other than a type
constraint.
(Note: clearly we could make `any` generally available as an alias for
`interface{}`, or as a new defined type defined as `interface{}`.
However, we don't want this design, which is about generics, to lead
to a possibly significant change to non-generic code.
Adding `any` as a general purpose name for `interface{}` can and
should be [discussed separately](https://golang.org/issue/33232)).
### Using a constraint
For a generic function, a constraint can be thought of as the type of
the type argument: a meta-type.
As shown above, constraints appear in the type parameter list as the
meta-type of a type parameter.
```
// Stringify calls the String method on each element of s,
// and returns the results.
func Stringify[T Stringer](s []T) (ret []string) {
for _, v := range s {
ret = append(ret, v.String())
}
return ret
}
```
The single type parameter `T` is followed by the constraint that
applies to `T`, in this case `Stringer`.
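As a usage sketch, any type whose method set includes `String() string` can be passed to `Stringify`. The element type `MyInt` here is hypothetical, invented for the example:

```go
package main

import (
	"fmt"
	"strconv"
)

// Stringer is the constraint from the text above.
type Stringer interface {
	String() string
}

// Stringify calls the String method on each element of s,
// and returns the results.
func Stringify[T Stringer](s []T) (ret []string) {
	for _, v := range s {
		ret = append(ret, v.String())
	}
	return ret
}

// MyInt is a hypothetical type that satisfies Stringer.
type MyInt int

func (i MyInt) String() string { return strconv.Itoa(int(i)) }

func main() {
	fmt.Println(Stringify([]MyInt{1, 2, 3})) // [1 2 3]
}
```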
### Multiple type parameters
Although the `Stringify` example uses only a single type parameter,
functions may have multiple type parameters.
```
// Print2 has two type parameters and two non-type parameters.
func Print2[T1, T2 any](s1 []T1, s2 []T2) { ... }
```
Compare this to
```
// Print2Same has one type parameter and two non-type parameters.
func Print2Same[T any](s1 []T, s2 []T) { ... }
```
In `Print2` `s1` and `s2` may be slices of different types.
In `Print2Same` `s1` and `s2` must be slices of the same element
type.
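The difference shows up at the call sites. In this sketch the bodies (elided as `...` above) print their arguments; the `int` result counting the elements is an addition for checkability, not part of the design:

```go
package main

import "fmt"

// Print2 has two type parameters, so s1 and s2 may be slices
// of different element types.
func Print2[T1, T2 any](s1 []T1, s2 []T2) int {
	fmt.Println(s1, s2)
	return len(s1) + len(s2)
}

// Print2Same has one type parameter, so s1 and s2 must be
// slices of the same element type.
func Print2Same[T any](s1 []T, s2 []T) int {
	fmt.Println(s1, s2)
	return len(s1) + len(s2)
}

func main() {
	Print2([]int{1}, []string{"a"}) // OK: different element types
	Print2Same([]int{1}, []int{2}) // OK: same element type
	// Print2Same([]int{1}, []string{"a"}) // does not compile
}
```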
Just as each ordinary parameter may have its own type, each type
parameter may have its own constraint.
```
// Stringer is a type constraint that requires a String method.
// The String method should return a string representation of the value.
type Stringer interface {
String() string
}
// Plusser is a type constraint that requires a Plus method.
// The Plus method is expected to add the argument to an internal
// string and return the result.
type Plusser interface {
Plus(string) string
}
// ConcatTo takes a slice of elements with a String method and a slice
// of elements with a Plus method. The slices should have the same
// number of elements. This will convert each element of s to a string,
// pass it to the Plus method of the corresponding element of p,
// and return a slice of the resulting strings.
func ConcatTo[S Stringer, P Plusser](s []S, p []P) []string {
r := make([]string, len(s))
for i, v := range s {
r[i] = p[i].Plus(v.String())
}
return r
}
```
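To exercise `ConcatTo`, each type argument need only satisfy its own constraint. The types `Name` and `Greeting` are hypothetical, invented for this sketch:

```go
package main

import "fmt"

type Stringer interface{ String() string }

type Plusser interface{ Plus(string) string }

// ConcatTo is the function from the text above.
func ConcatTo[S Stringer, P Plusser](s []S, p []P) []string {
	r := make([]string, len(s))
	for i, v := range s {
		r[i] = p[i].Plus(v.String())
	}
	return r
}

// Name satisfies Stringer; Greeting satisfies Plusser.
type Name string

func (n Name) String() string { return string(n) }

type Greeting string

func (g Greeting) Plus(s string) string { return string(g) + s }

func main() {
	fmt.Println(ConcatTo([]Name{"go"}, []Greeting{"hello, "}))
	// prints [hello, go]
}
```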
A single constraint can be used for multiple type parameters, just as
a single type can be used for multiple non-type function parameters.
The constraint applies to each type parameter separately.
```
// Stringify2 converts two slices of different types to strings,
// and returns the concatenation of all the strings.
func Stringify2[T1, T2 Stringer](s1 []T1, s2 []T2) string {
r := ""
for _, v1 := range s1 {
r += v1.String()
}
for _, v2 := range s2 {
r += v2.String()
}
return r
}
```
### Generic types
We want more than just generic functions: we also want generic types.
We suggest that types be extended to take type parameters.
```
// Vector is a name for a slice of any element type.
type Vector[T any] []T
```
A type's type parameters are just like a function's type parameters.
Within the type definition, the type parameters may be used like any
other type.
To use a generic type, you must supply type arguments.
This is called _instantiation_.
The type arguments appear in square brackets, as usual.
When we instantiate a type by supplying type arguments for the type
parameters, we produce a type in which each use of a type parameter in
the type definition is replaced by the corresponding type argument.
```
// v is a Vector of int values.
//
// This is similar to pretending that "Vector[int]" is a valid identifier,
// and writing
// type "Vector[int]" []int
// var v "Vector[int]"
// All uses of Vector[int] will refer to the same "Vector[int]" type.
//
var v Vector[int]
```
Generic types can have methods.
The receiver type of a method must declare the same number of type
parameters as are declared in the receiver type's definition.
They are declared without any constraint.
```
// Push adds a value to the end of a vector.
func (v *Vector[T]) Push(x T) { *v = append(*v, x) }
```
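Putting instantiation and the method together, a `Vector[int]` behaves like a `[]int` with a `Push` method (assuming a compiler implementing this design):

```go
package main

import "fmt"

// Vector is a name for a slice of any element type.
type Vector[T any] []T

// Push adds a value to the end of a vector.
func (v *Vector[T]) Push(x T) { *v = append(*v, x) }

func main() {
	var v Vector[int] // instantiate Vector with type argument int
	v.Push(1)
	v.Push(2)
	fmt.Println(v) // [1 2]
}
```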
The type parameters listed in a method declaration need not have the
same names as the type parameters in the type declaration.
In particular, if they are not used by the method, they can be `_`.
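For example, a method that uses only one of its receiver's type parameters can write the unused one as `_`. The type `Pair` and method `FirstOnly` are hypothetical names for this sketch:

```go
package main

import "fmt"

// Pair holds two values of possibly different types.
type Pair[A, B any] struct {
	First  A
	Second B
}

// FirstOnly uses only the first type parameter, so the second
// is written as _ in the method declaration.
func (p Pair[A, _]) FirstOnly() A { return p.First }

func main() {
	p := Pair[int, string]{First: 1, Second: "x"}
	fmt.Println(p.FirstOnly()) // 1
}
```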
A generic type can refer to itself in cases where a type can
ordinarily refer to itself, but when it does so the type arguments
must be the type parameters, listed in the same order.
This restriction prevents infinite recursion of type instantiation.
```
// List is a linked list of values of type T.
type List[T any] struct {
next *List[T] // this reference to List[T] is OK
val T
}
// This type is INVALID.
type P[T1, T2 any] struct {
F *P[T2, T1] // INVALID; must be [T1, T2]
}
```
This restriction applies to both direct and indirect references.
```
// ListHead is the head of a linked list.
type ListHead[T any] struct {
head *ListElement[T]
}
// ListElement is an element in a linked list with a head.
// Each element points back to the head.
type ListElement[T any] struct {
next *ListElement[T]
val T
// Using ListHead[T] here is OK.
// ListHead[T] refers to ListElement[T] refers to ListHead[T].
// Using ListHead[int] would not be OK, as ListHead[T]
// would have an indirect reference to ListHead[int].
head *ListHead[T]
}
```
(Note: with more understanding of how people want to write code, it
may be possible to relax this rule to permit some cases that use
different type arguments.)
The type parameter of a generic type may have constraints other than
`any`.
```
// StringableVector is a slice of some type, where the type
// must have a String method.
type StringableVector[T Stringer] []T
func (s StringableVector[T]) String() string {
var sb strings.Builder
for i, v := range s {
if i > 0 {
sb.WriteString(", ")
}
// It's OK to call v.String here because v is of type T
// and T's constraint is Stringer.
sb.WriteString(v.String())
}
return sb.String()
}
```
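A usage sketch, with a hypothetical element type `MyInt` supplying the required `String` method:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type Stringer interface{ String() string }

// StringableVector is the generic type from the text above.
type StringableVector[T Stringer] []T

func (s StringableVector[T]) String() string {
	var sb strings.Builder
	for i, v := range s {
		if i > 0 {
			sb.WriteString(", ")
		}
		// v is of type T, whose constraint is Stringer,
		// so calling v.String is permitted.
		sb.WriteString(v.String())
	}
	return sb.String()
}

// MyInt is a hypothetical element type with a String method.
type MyInt int

func (i MyInt) String() string { return strconv.Itoa(int(i)) }

func main() {
	v := StringableVector[MyInt]{1, 2, 3}
	fmt.Println(v.String()) // 1, 2, 3
}
```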
### Methods may not take additional type arguments
Although methods of a generic type may use the type's parameters,
methods may not themselves have additional type parameters.
Where it would be useful to add type arguments to a method, people
will have to write a suitably parameterized top-level function.
There is more discussion of this in [the issues
section](#No-parameterized-methods).
### Operators
As we've seen, we are using interface types as constraints.
Interface types provide a set of methods, and nothing else.
This means that with what we've seen so far, the only thing that
generic functions can do with values of type parameters, other than
operations that are permitted for any type, is call methods.
However, method calls are not sufficient for everything we want to
express.
Consider this simple function that returns the smallest element of a
slice of values, where the slice is assumed to be non-empty.
```
// This function is INVALID.
func Smallest[T any](s []T) T {
r := s[0] // panic if slice is empty
for _, v := range s[1:] {
if v < r { // INVALID
r = v
}
}
return r
}
```
Any reasonable generics implementation should let you write this
function.
The problem is the expression `v < r`.
This assumes that `T` supports the `<` operator, but the constraint on
`T` is simply `any`.
With the `any` constraint the function `Smallest` can only use
operations that are available for all types, but not all Go types
support `<`.
Unfortunately, since `<` is not a method, there is no obvious way to
write a constraint—an interface type—that permits `<`.
We need a way to write a constraint that accepts only types that
support `<`.
In order to do that, we observe that, aside from two exceptions that
we will discuss later, all the arithmetic, comparison, and logical
operators defined by the language may only be used with types that are
predeclared by the language, or with defined types whose underlying
type is one of those predeclared types.
That is, the operator `<` can only be used with a predeclared type
such as `int` or `float64`, or a defined type whose underlying type is
one of those types.
Go does not permit using `<` with a composite type or with an
arbitrary defined type.
This means that rather than try to write a constraint for `<`, we can
approach this the other way around: instead of saying which operators
a constraint should support, we can say which types a constraint
should accept.
We do this by defining a _type set_ for a constraint.
#### Type sets
Although we are primarily interested in defining the type set of
constraints, the most straightforward approach is to define the type
set of all types.
The type set of a constraint is then constructed out of the type sets
of its elements.
This may seem like a digression from the topic of using operators with
parameterized types, but we'll get there in the end.
Every type has an associated type set.
The type set of a non-interface type `T` is simply the set `{T}`: a
set that contains just `T` itself.
The type set of an ordinary interface type is the set of all types
that declare all the methods of the interface.
Note that the type set of an ordinary interface type is an infinite
set.
For any given type `T` and interface type `IT` it's easy to tell
whether `T` is in the type set of `IT` (by checking if all methods of
`IT` are declared by `T`), but there is no reasonable way to enumerate
all the types in the type set of `IT`.
The type `IT` is a member of its own type set, because an interface
inherently declares all of its own methods.
The type set of the empty interface `interface{}` is the set of all
possible types.
It will be useful to construct the type set of an interface type by
looking at the elements of the interface.
This will produce the same result in a different way.
The elements of an interface can be either a method signature or an
embedded interface type.
Although a method signature is not a type, it's convenient to define a
type set for it: the set of all types that declare that method.
The type set of an embedded interface type `E` is simply that of `E`:
the set of all types that declare all the methods of `E`.
For any method signature `M`, the type set of `interface{ M }` is
the type set of `M`: the set of all types that declare `M`.
For any method signatures `M1` and `M2`, the type set of `interface{
M1; M2 }` is the set of all types that declare both `M1` and `M2`.
This is the intersection of the type set of `M1` and the type set of
`M2`.
To see this, observe that the type set of `M1` is the set of all types
with a method `M1`, and similarly for `M2`.
If we take the intersection of those two type sets, the result is the
set of all types that declare both `M1` and `M2`.
That is exactly the type set of `interface{ M1; M2 }`.
The same applies to embedded interface types.
For any two interface types `E1` and `E2`, the type set of `interface{
E1; E2 }` is the intersection of the type sets of `E1` and `E2`.
Therefore, the type set of an interface type is the intersection of
the type sets of the elements of the interface.
#### Type sets of constraints
Now that we have described the type set of an interface type, we will
redefine what it means to satisfy the constraint.
Earlier we said that a type argument satisfies a constraint if it
implements the constraint.
Now we will say that a type argument satisfies a constraint if it is a
member of the constraint's type set.
For an ordinary interface type, one whose only elements are method
signatures and embedded ordinary interface types, the meaning is
exactly the same: the set of types that implement the interface type
is exactly the set of types that are in its type set.
We will now proceed to define additional elements that may appear in
an interface type that is used as a constraint, and define how those
additional elements can be used to further control the type set of the
constraint.
#### Constraint elements
The elements of an ordinary interface type are method signatures and
embedded interface types.
We propose permitting three additional elements that may be used in an
interface type used as a constraint.
If any of these additional elements are used, the interface type may
not be used as an ordinary type, but may only be used as a constraint.
##### Arbitrary type constraint element
The first new element is to simply permit listing any type, not just
an interface type.
For example: `type Integer interface{ int }`.
When a non-interface type `T` is listed as an element of a constraint,
its type set is simply `{T}`.
The type set of `int` is `{int}`.
Since the type set of a constraint is the intersection of the type
sets of all elements, the type set of `Integer` is also `{int}`.
This constraint `Integer` can be satisfied by any type that is a
member of the set `{int}`.
There is exactly one such type: `int`.
The type may be a type literal that refers to a type parameter (or
more than one), but it may not be a plain type parameter.
```
// EmbeddedParameter is INVALID.
type EmbeddedParameter[T any] interface {
T // INVALID: may not list a plain type parameter
}
```
##### Approximation constraint element
Listing a single type is useless by itself.
For constraint satisfaction, we want to be able to say not just `int`,
but "any type whose underlying type is `int`".
Consider the `Smallest` example above.
We want it to work not just for slices of the predeclared ordered
types, but also for types defined by a program.
If a program uses `type MyString string`, the program can use the `<`
operator with values of type `MyString`.
It should be possible to instantiate `Smallest` with the type
`MyString`.
To support this, the second new element we permit in a constraint is a
new syntactic construct: an approximation element, written as `~T`.
The type set of `~T` is the set of all types whose underlying type is
`T`.
For example: `type AnyString interface{ ~string }`.
The type set of `~string`, and therefore the type set of `AnyString`,
is the set of all types whose underlying type is `string`.
That includes the type `MyString`; `MyString` used as a type argument
will satisfy the constraint `AnyString`.
This new `~T` syntax will be the first use of `~` as a token in Go.
Since `~T` means the set of all types whose underlying type is `T`, it
will be an error to use `~T` with a type `T` whose underlying type is
not itself.
Types whose underlying types are themselves are:
1. Type literals, such as `[]byte` or `struct{ f int }`.
2. Most predeclared types, such as `int` or `string` (but not
`error`).
Using `~T` is not permitted if `T` is a type parameter or if `T` is an
interface type.
```
type MyString string
// AnyString matches any type whose underlying type is string.
// This includes, among others, the type string itself, and
// the type MyString.
type AnyString interface {
~string
}
// ApproximateMyString is INVALID.
type ApproximateMyString interface {
~MyString // INVALID: underlying type of MyString is not MyString
}
// ApproximateParameter is INVALID.
type ApproximateParameter[T any] interface {
~T // INVALID: T is a type parameter
}
```
##### Union constraint element
The third new element we permit in a constraint is also a new
syntactic construct: a union element, written as a series of
constraint elements separated by vertical bars (`|`).
For example: `int | float32` or `~int8 | ~int16 | ~int32 | ~int64`.
The type set of a union element is the union of the type sets of each
element in the sequence.
The elements listed in a union must all be different.
For example:
```
// PredeclaredSignedInteger is a constraint that matches the
// five predeclared signed integer types.
type PredeclaredSignedInteger interface {
int | int8 | int16 | int32 | int64
}
```
The type set of this union element is the set `{int, int8, int16,
int32, int64}`.
Since the union is the only element of `PredeclaredSignedInteger`,
that is also the type set of `PredeclaredSignedInteger`.
This constraint can be satisfied by any of those five types.
Here is an example using approximation elements:
```
// SignedInteger is a constraint that matches any signed integer type.
type SignedInteger interface {
~int | ~int8 | ~int16 | ~int32 | ~int64
}
```
The type set of this constraint is the set of all types whose
underlying type is one of `int`, `int8`, `int16`, `int32`, or
`int64`.
Any of those types will satisfy this constraint.
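Because every type in the constraint's type set supports the arithmetic and comparison operators of signed integers, a generic function constrained by `SignedInteger` may use them. The function name `Abs` and the type `MyInt` are hypothetical:

```go
package main

import "fmt"

// SignedInteger matches any type whose underlying type is one
// of the predeclared signed integer types.
type SignedInteger interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64
}

// Abs may use < and unary - because every type in the type set
// of SignedInteger supports those operators.
func Abs[T SignedInteger](x T) T {
	if x < 0 {
		return -x
	}
	return x
}

// MyInt is in the type set because of the ~int16 element.
type MyInt int16

func main() {
	fmt.Println(Abs(-3), Abs(MyInt(-7))) // 3 7
}
```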
The new constraint element syntax is
```
InterfaceType = "interface" "{" {(MethodSpec | InterfaceTypeName | ConstraintElem) ";" } "}" .
ConstraintElem = ConstraintTerm { "|" ConstraintTerm } .
ConstraintTerm = ["~"] Type .
```
#### Operations based on type sets
The purpose of type sets is to permit generic functions to use
operators, such as `<`, with values whose type is a type parameter.
The rule is that a generic function may use a value whose type is a
type parameter in any way that is permitted by every member of the
type set of the parameter's constraint.
This applies to general operators such as `<` or `+`.
For special purpose operators like `range` loops, we permit their use
if the type parameter has a structural constraint, as [defined
later](#Constraint-type-inference); the definition here is basically
that the constraint has a single underlying type.
If the function can be compiled successfully using each type in the
constraint's type set, or when applicable using the structural type,
then the use is permitted.
For the `Smallest` example shown earlier, we could use a constraint
like this:
```
package constraints
// Ordered is a type constraint that matches any ordered type.
// An ordered type is one that supports the <, <=, >, and >= operators.
type Ordered interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 |
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr |
~float32 | ~float64 |
~string
}
```
In practice this constraint would likely be defined and exported in a
new standard library package, `constraints`, so that it could be used
by function and type definitions.
Given that constraint, we can write this function, now valid:
```
// Smallest returns the smallest element in a slice.
// It panics if the slice is empty.
func Smallest[T constraints.Ordered](s []T) T {
r := s[0] // panics if slice is empty
for _, v := range s[1:] {
if v < r {
r = v
}
}
return r
}
```
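Here is `Smallest` in use. For a self-contained sketch the `Ordered` constraint is declared locally rather than imported from the proposed `constraints` package:

```go
package main

import "fmt"

// Ordered is the constraint from the text, declared locally
// here so the example is self-contained.
type Ordered interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr |
		~float32 | ~float64 |
		~string
}

// Smallest returns the smallest element in a slice.
// It panics if the slice is empty.
func Smallest[T Ordered](s []T) T {
	r := s[0] // panics if slice is empty
	for _, v := range s[1:] {
		if v < r {
			r = v
		}
	}
	return r
}

func main() {
	fmt.Println(Smallest([]int{3, 1, 2}))          // 1
	fmt.Println(Smallest([]string{"b", "a", "c"})) // a
}
```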
#### Comparable types in constraints
Earlier we mentioned that there are two exceptions to the rule that
operators may only be used with types that are predeclared by the
language.
The exceptions are `==` and `!=`, which are permitted for struct,
array, and interface types.
These are useful enough that we want to be able to write a constraint
that accepts any comparable type.
To do this we introduce a new predeclared type constraint:
`comparable`.
The type set of the `comparable` constraint is the set of all
comparable types.
This permits the use of `==` and `!=` with values of that type
parameter.
For example, this function may be instantiated with any comparable
type:
```
// Index returns the index of x in s, or -1 if not found.
func Index[T comparable](s []T, x T) int {
for i, v := range s {
// v and x are type T, which has the comparable
// constraint, so we can use == here.
if v == x {
return i
}
}
return -1
}
```
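`Index` can then be instantiated with any comparable type, such as `string` or `int`:

```go
package main

import "fmt"

// Index returns the index of x in s, or -1 if not found.
func Index[T comparable](s []T, x T) int {
	for i, v := range s {
		// v and x are type T, which has the comparable
		// constraint, so we can use == here.
		if v == x {
			return i
		}
	}
	return -1
}

func main() {
	fmt.Println(Index([]string{"a", "b", "c"}, "b")) // 1
	fmt.Println(Index([]int{1, 2, 3}, 4))            // -1
}
```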
Since `comparable` is a constraint, it can be embedded in another
interface type used as a constraint.
```
// ComparableHasher is a type constraint that matches all
// comparable types with a Hash method.
type ComparableHasher interface {
comparable
Hash() uintptr
}
```
The constraint `ComparableHasher` is implemented by any type that is
comparable and also has a `Hash() uintptr` method.
A generic function that uses `ComparableHasher` as a constraint can
compare values of that type and can call the `Hash` method.
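For instance, such a function might use `==` and `Hash` together. The function `SameBucket` and the type `Key` are hypothetical, invented for this sketch:

```go
package main

import "fmt"

// ComparableHasher matches comparable types with a Hash method.
type ComparableHasher interface {
	comparable
	Hash() uintptr
}

// SameBucket reports whether a and b are distinct values that
// hash to the same bucket. It uses both == (permitted by the
// embedded comparable) and the Hash method.
func SameBucket[T ComparableHasher](a, b T, buckets uintptr) bool {
	return a != b && a.Hash()%buckets == b.Hash()%buckets
}

// Key is a hypothetical type satisfying ComparableHasher.
type Key int

func (k Key) Hash() uintptr { return uintptr(k) }

func main() {
	fmt.Println(SameBucket(Key(1), Key(9), 8)) // true: 1%8 == 9%8
	fmt.Println(SameBucket(Key(1), Key(2), 8)) // false
}
```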
It's possible to use `comparable` to produce a constraint that can not
be satisfied by any type.
See also the [discussion of empty type sets below](#Empty-type-sets).
```
// ImpossibleConstraint is a type constraint that no type can satisfy,
// because slice types are not comparable.
type ImpossibleConstraint interface {
comparable
[]int
}
```
### Mutually referencing type parameters
Within a single type parameter list, constraints may refer to any of
the other type parameters, even ones that are declared later in the
same list.
(The scope of a type parameter starts at the beginning of the type
parameter list and extends to the end of the enclosing function or
type declaration.)
For example, consider a generic graph package that contains generic
algorithms that work with graphs.
The algorithms use two types, `Node` and `Edge`.
`Node` is expected to have a method `Edges() []Edge`.
`Edge` is expected to have a method `Nodes() (Node, Node)`.
A graph can be represented as a `[]Node`.
This simple representation is enough to implement graph algorithms
like finding the shortest path.
```
package graph
// NodeConstraint is the type constraint for graph nodes:
// they must have an Edges method that returns the Edge's
// that connect to this Node.
type NodeConstraint[Edge any] interface {
Edges() []Edge
}
// EdgeConstraint is the type constraint for graph edges:
// they must have a Nodes method that returns the two Nodes
// that this edge connects.
type EdgeConstraint[Node any] interface {
Nodes() (from, to Node)
}
// Graph is a graph composed of nodes and edges.
type Graph[Node NodeConstraint[Edge], Edge EdgeConstraint[Node]] struct { ... }
// New returns a new graph given a list of nodes.
func New[Node NodeConstraint[Edge], Edge EdgeConstraint[Node]] (nodes []Node) *Graph[Node, Edge] {
...
}
// ShortestPath returns the shortest path between two nodes,
// as a list of edges.
func (g *Graph[Node, Edge]) ShortestPath(from, to Node) []Edge { ... }
```
There are a lot of type arguments and instantiations here.
In the constraint on `Node` in `Graph`, the `Edge` being passed to the
type constraint `NodeConstraint` is the second type parameter of
`Graph`.
This instantiates `NodeConstraint` with the type parameter `Edge`, so
we see that `Node` must have a method `Edges` that returns a slice of
`Edge`, which is what we want.
The same applies to the constraint on `Edge`, and the same type
parameters and constraints are repeated for the function `New`.
We aren't claiming that this is simple, but we are claiming that it is
possible.
It's worth noting that while at first glance this may look like a
typical use of interface types, `Node` and `Edge` are non-interface
types with specific methods.
In order to use `graph.Graph`, the type arguments used for `Node` and
`Edge` have to define methods that follow a certain pattern, but they
don't have to actually use interface types to do so.
In particular, the methods do not return interface types.
For example, consider these type definitions in some other package:
```
// Vertex is a node in a graph.
type Vertex struct { ... }
// Edges returns the edges connected to v.
func (v *Vertex) Edges() []*FromTo { ... }
// FromTo is an edge in a graph.
type FromTo struct { ... }
// Nodes returns the nodes that ft connects.
func (ft *FromTo) Nodes() (*Vertex, *Vertex) { ... }
```
There are no interface types here, but we can instantiate
`graph.Graph` using the type arguments `*Vertex` and `*FromTo`.
```
var g = graph.New[*Vertex, *FromTo]([]*Vertex{ ... })
```
`*Vertex` and `*FromTo` are not interface types, but when used
together they define methods that implement the constraints of
`graph.Graph`.
Note that we couldn't pass plain `Vertex` or `FromTo` to `graph.New`,
since `Vertex` and `FromTo` do not implement the constraints.
The `Edges` and `Nodes` methods are defined on the pointer types
`*Vertex` and `*FromTo`; the types `Vertex` and `FromTo` do not have
any methods.
When we use a generic interface type as a constraint, we first
instantiate the type with the type argument(s) supplied in the type
parameter list, and then compare the corresponding type argument
against the instantiated constraint.
In this example, the `Node` type argument to `graph.New` has a
constraint `NodeConstraint[Edge]`.
When we call `graph.New` with a `Node` type argument of `*Vertex` and
an `Edge` type argument of `*FromTo`, in order to check the constraint
on `Node` the compiler instantiates `NodeConstraint` with the type
argument `*FromTo`.
That produces an instantiated constraint, in this case a requirement
that `Node` have a method `Edges() []*FromTo`, and the compiler
verifies that `*Vertex` satisfies that constraint.
Although `Node` and `Edge` do not have to be instantiated with
interface types, it is also OK to use interface types if you like.
```
type NodeInterface interface { Edges() []EdgeInterface }
type EdgeInterface interface { Nodes() (NodeInterface, NodeInterface) }
```
We could instantiate `graph.Graph` with the types `NodeInterface` and
`EdgeInterface`, since they implement the type constraints.
There isn't much reason to instantiate a type this way, but it is
permitted.
This ability for type parameters to refer to other type parameters
illustrates an important point: it should be a requirement for any
attempt to add generics to Go that it be possible to instantiate
generic code with multiple type arguments that refer to each other in
ways that the compiler can check.
### Type inference
In many cases we can use type inference to avoid having to explicitly
write out some or all of the type arguments.
We can use _function argument type inference_ for a function call to
deduce type arguments from the types of the non-type arguments.
We can use _constraint type inference_ to deduce unknown type arguments
from known type arguments.
In the examples above, when instantiating a generic function or type,
we always specified type arguments for all the type parameters.
We also permit specifying just some of the type arguments, or omitting
the type arguments entirely, when the missing type arguments can be
inferred.
When only some type arguments are passed, they are the arguments for
the first type parameters in the list.
For example, a function like this:
```
func Map[F, T any](s []F, f func(F) T) []T { ... }
```
can be called in these ways. (We'll explain below how type inference
works in detail; this example is to show how an incomplete list of
type arguments is handled.)
```
var s []int
f := func(i int) int64 { return int64(i) }
var r []int64
// Specify both type arguments explicitly.
r = Map[int, int64](s, f)
// Specify just the first type argument, for F,
// and let T be inferred.
r = Map[int](s, f)
// Don't specify any type arguments, and let both be inferred.
r = Map(s, f)
```
If a generic function or type is used without specifying all the type
arguments, it is an error if any of the unspecified type arguments
cannot be inferred.
(Note: type inference is a convenience feature.
Although we think it is an important feature, it does not add any
functionality to the design, only convenience in using it.
It would be possible to omit it from the initial implementation, and
see whether it seems to be needed.
That said, this feature doesn't require additional syntax, and
produces more readable code.)
#### Type unification
Type inference is based on _type unification_.
Type unification applies to two types, either or both of which may be
or contain type parameters.
Type unification works by comparing the structure of the types.
Their structure disregarding type parameters must be identical, and
types other than type parameters must be equivalent.
A type parameter in one type may match any complete subtype in the
other type.
If the structure differs, or types other than type parameters are not
equivalent, then type unification fails.
A successful type unification provides a list of associations of type
parameters with other types (which may themselves be or contain type
parameters).
For type unification, two types that don't contain any type parameters
are equivalent if they are
[identical](https://golang.org/ref/spec#Type_identity), or if they are
channel types that are identical ignoring channel direction, or if
their underlying types are equivalent.
It's OK to permit types to not be identical during type inference,
because we will still check the constraints if inference succeeds, and
we will still check that the function arguments are assignable to the
inferred types.
For example, if `T1` and `T2` are type parameters, `[]map[int]bool`
can be unified with any of the following:
* `[]map[int]bool`
* `T1` (`T1` matches `[]map[int]bool`)
* `[]T1` (`T1` matches `map[int]bool`)
* `[]map[T1]T2` (`T1` matches `int`, `T2` matches `bool`)
(This is not an exhaustive list; there are other possible successful
unifications.)
On the other hand, `[]map[int]bool` cannot be unified with any of
* `int`
* `struct{}`
* `[]struct{}`
* `[]map[T1]string`
(This list is of course also not exhaustive; there are an infinite
number of types that cannot be successfully unified.)
In general we can also have type parameters on both sides, so in some
cases we might associate `T1` with, for example, `T2`, or `[]T2`.
#### Function argument type inference
Function argument type inference is used with a function call to infer
type arguments from non-type arguments.
Function argument type inference is not used when a type is
instantiated, and it is not used when a function is instantiated but
not called.
To see how it works, let's go back to [the example](#Type-parameters)
of a call to the simple `Print` function:
```
Print[int]([]int{1, 2, 3})
```
The type argument `int` in this function call can be inferred from the
type of the non-type argument.
The only type arguments that can be inferred are those that are used
for the types of the function's (non-type) input parameters.
If there are some type parameters that are used only for the
function's result parameter types, or only in the body of the
function, then those type arguments cannot be inferred using function
argument type inference.
To infer function type arguments, we unify the types of the function
call arguments with the types of the function's non-type parameters.
On the caller side we have the list of types of the actual (non-type)
arguments, which for the `Print` example is simply `[]int`.
On the function side is the list of the types of the function's
non-type parameters, which for `Print` is `[]T`.
From both lists, we discard the corresponding pairs for which the
function side does not use a type parameter.
We must then apply type unification to the remaining argument types.
Function argument type inference is a two-pass algorithm.
In the first pass, we ignore untyped constants on the caller side and
their corresponding types in the function definition.
We use two passes so that in some cases later arguments can determine
the type of an untyped constant.
We unify corresponding types in the lists.
This will give us an association of type parameters on the function
side to types on the caller side.
If the same type parameter appears more than once on the function
side, it will match multiple argument types on the caller side.
If those caller types are not equivalent, we report an error.
After the first pass, we check any untyped constants on the caller
side.
If there are no untyped constants, or if the type parameters in the
corresponding function types have matched other input types, then
type unification is complete.
Otherwise, for the second pass, for any untyped constants whose
corresponding function types are not yet set, we determine the default
type of the untyped constant in [the usual
way](https://golang.org/ref/spec#Constants).
Then we unify the remaining types again, this time with no untyped
constants.
When constraint type inference is possible, as described below, it is
applied between the two passes.
In this example
```
s1 := []int{1, 2, 3}
Print(s1)
```
we compare `[]int` with `[]T`, match `T` with `int`, and we are done.
The single type parameter `T` is `int`, so we infer that the call to
`Print` is really a call to `Print[int]`.
For a more complex example, consider
```
// Map calls the function f on every element of the slice s,
// returning a new slice of the results.
func Map[F, T any](s []F, f func(F) T) []T {
r := make([]T, len(s))
for i, v := range s {
r[i] = f(v)
}
return r
}
```
The two type parameters `F` and `T` are both used for input
parameters, so function argument type inference is possible.
In the call
```
strs := Map([]int{1, 2, 3}, strconv.Itoa)
```
we unify `[]int` with `[]F`, matching `F` with `int`.
We unify the type of `strconv.Itoa`, which is `func(int) string`,
with `func(F) T`, matching `F` with `int` and `T` with `string`.
The type parameter `F` is matched twice, both times with `int`.
Unification succeeds, so the call written as `Map` is a call of
`Map[int, string]`.
To see the untyped constant rule in effect, consider:
```
// NewPair returns a pair of values of the same type.
func NewPair[F any](f1, f2 F) *Pair[F] { ... }
```
In the call `NewPair(1, 2)` both arguments are untyped constants, so
both are ignored in the first pass.
There is nothing to unify.
We still have two untyped constants after the first pass.
Both are set to their default type, `int`.
The second run of the type unification pass unifies `F` with
`int`, so the final call is `NewPair[int](1, 2)`.
In the call `NewPair(1, int64(2))` the first argument is an untyped
constant, so we ignore it in the first pass.
We then unify `int64` with `F`.
At this point the type parameter corresponding to the untyped constant
is fully determined, so the final call is `NewPair[int64](1,
int64(2))`.
In the call `NewPair(1, 2.5)` both arguments are untyped constants,
so we move on to the second pass.
This time we set the first constant to `int` and the second to
`float64`.
We then try to unify `F` with both `int` and `float64`, so unification
fails, and we report a compilation error.
As mentioned earlier, function argument type inference is done without
regard to constraints.
First we use function argument type inference to determine type
arguments to use for the function, and then, if that succeeds, we
check whether those type arguments implement the constraints (if
any).
Note that after successful function argument type inference, the
compiler must still check that the arguments can be assigned to the
parameters, as for any function call.
#### Constraint type inference
Constraint type inference permits inferring a type argument from
another type argument, based on type parameter constraints.
Constraint type inference is useful when a function wants to have a
type name for an element of some other type parameter, or when a
function wants to apply a constraint to a type that is based on some
other type parameter.
Constraint type inference can only infer types if some type parameter
has a constraint that has a type set with exactly one type in it, or a
type set for which the underlying type of every type in the type set
is the same type.
The two cases are slightly different: in the first case, in which the
type set has exactly one type, that single type need not be its own
underlying type.
Either way, the single type is called a _structural type_, and the
constraint is called a _structural constraint_.
The structural type describes the required structure of the type
parameter.
A structural constraint may also define methods, but the methods are
ignored by constraint type inference.
For constraint type inference to be useful, the structural type will
normally be defined using one or more type parameters.
Constraint type inference is only tried if there is at least one type
parameter whose type argument is not yet known.
While the algorithm we describe here may seem complex, for typical
concrete examples it is straightforward to see what constraint type
inference will deduce.
The description of the algorithm is followed by a couple of examples.
We start by creating a mapping from type parameters to type arguments.
We initialize the mapping with all type parameters whose type
arguments are already known, if any.
For each type parameter with a structural constraint, we unify the
type parameter with the structural type.
This will have the effect of associating the type parameter with its
constraint.
We add the result into the mapping we are maintaining.
If unification finds any associations of type parameters, we add those
to the mapping as well.
When we find multiple associations of any one type parameter, we unify
each such association to produce a single mapping entry.
If a type parameter is associated directly with another type
parameter, meaning that they must both be matched with the same type,
we unify the associations of each parameter together.
If any of these various unifications fail, then constraint type
inference fails.
After merging all type parameters with structural constraints, we have
a mapping of various type parameters to types (which may be or contain
other type parameters).
We continue by looking for a type parameter `T` that is mapped to a
fully known type argument `A`, one that does not contain any type
parameters.
Anywhere that `T` appears in a type argument in the mapping, we
replace `T` with `A`.
We repeat this process until we have replaced every type parameter.
When constraint type inference is possible, type inference proceeds as
follows:
* Build the mapping using known type arguments.
* Apply constraint type inference.
* Apply function type inference using typed arguments.
* Apply constraint type inference again.
* Apply function type inference using the default types of any
remaining untyped arguments.
* Apply constraint type inference again.
##### Element constraint example
For an example of where constraint type inference is useful, let's
consider a function that takes a defined type that is a slice of
numbers, and returns an instance of that same defined type in which
each number is doubled.
It's easy to write a function similar to this if we ignore the
[defined type](https://golang.org/ref/spec#Type_definitions)
requirement.
```
// Double returns a new slice that contains all the elements of s, doubled.
func Double[E constraints.Integer](s []E) []E {
r := make([]E, len(s))
for i, v := range s {
r[i] = v + v
}
return r
}
```
However, with that definition, if we call the function with a defined
slice type, the result will not be that defined type.
```
// MySlice is a slice of ints.
type MySlice []int
// The type of V1 will be []int, not MySlice.
// Here we are using function argument type inference,
// but not constraint type inference.
var V1 = Double(MySlice{1})
```
We can do what we want by introducing a new type parameter.
```
// DoubleDefined returns a new slice that contains the elements of s,
// doubled, and also has the same type as s.
func DoubleDefined[S ~[]E, E constraints.Integer](s S) S {
// Note that here we pass S to make, where above we passed []E.
r := make(S, len(s))
for i, v := range s {
r[i] = v + v
}
return r
}
```
Now if we use explicit type arguments, we can get the right type.
```
// The type of V2 will be MySlice.
var V2 = DoubleDefined[MySlice, int](MySlice{1})
```
Function argument type inference by itself is not enough to infer the
type arguments here, because the type parameter E is not used for any
input parameter.
But a combination of function argument type inference and constraint
type inference works.
```
// The type of V3 will be MySlice.
var V3 = DoubleDefined(MySlice{1})
```
First we apply function argument type inference.
We see that the type of the argument is `MySlice`.
Function argument type inference matches the type parameter `S` with
`MySlice`.
We then move on to constraint type inference.
We know one type argument, `S`.
We see that the type argument `S` has a structural type constraint.
We create a mapping of known type arguments:
```
{S -> MySlice}
```
We then unify each type parameter with a structural constraint with
the single type in that constraint's type set.
In this case the structural constraint is `~[]E` which has the
structural type `[]E`, so we unify `S` with `[]E`.
Since we already have a mapping for `S`, we then unify `[]E` with
`MySlice`.
As `MySlice` is defined as `[]int`, that associates `E` with `int`.
We now have:
```
{S -> MySlice, E -> int}
```
We then substitute `E` with `int`, which changes nothing, and we are
done.
The type arguments for this call to `DoubleDefined` are `[MySlice,
int]`.
This example shows how we can use constraint type inference to set a
type name for an element of some other type parameter.
In this case we can name the element type of `S` as `E`, and we can
then apply further constraints to `E`, in this case requiring that it
be a number.
##### Pointer method example
Consider this example of a function that expects a type `T` that has a
`Set(string)` method that initializes a value based on a string.
```
// Setter is a type constraint that requires that the type
// implement a Set method that sets the value from a string.
type Setter interface {
Set(string)
}
// FromStrings takes a slice of strings and returns a slice of T,
// calling the Set method to set each returned value.
//
// Note that because T is only used for a result parameter,
// function argument type inference does not work when calling
// this function.
func FromStrings[T Setter](s []string) []T {
result := make([]T, len(s))
for i, v := range s {
result[i].Set(v)
}
return result
}
```
Now let's see some calling code (this example is invalid).
```
// Settable is an integer type that can be set from a string.
type Settable int
// Set sets the value of *p from a string.
func (p *Settable) Set(s string) {
i, _ := strconv.Atoi(s) // real code should not ignore the error
*p = Settable(i)
}
func F() {
// INVALID
nums := FromStrings[Settable]([]string{"1", "2"})
// Here we want nums to be []Settable{1, 2}.
...
}
```
The goal is to use `FromStrings` to get a slice of type `[]Settable`.
Unfortunately, this example is not valid and will not compile.
The problem is that `FromStrings` requires a type that has a
`Set(string)` method.
The function `F` is trying to instantiate `FromStrings` with
`Settable`, but `Settable` does not have a `Set` method.
The type that has a `Set` method is `*Settable`.
So let's rewrite `F` to use `*Settable` instead.
```
func F() {
// Compiles but does not work as desired.
// This will panic at run time when calling the Set method.
nums := FromStrings[*Settable]([]string{"1", "2"})
...
}
```
This compiles but unfortunately it will panic at run time.
The problem is that `FromStrings` creates a slice of type `[]T`.
When instantiated with `*Settable`, that means a slice of type
`[]*Settable`.
When `FromStrings` calls `result[i].Set(v)`, that invokes the `Set`
method on the pointer stored in `result[i]`.
That pointer is `nil`.
The `Settable.Set` method will be invoked with a `nil` receiver, and
will raise a panic due to a `nil` dereference error.
The pointer type `*Settable` implements the constraint, but the code
really wants to use the non-pointer type `Settable`.
What we need is a way to write `FromStrings` such that it can take the
type `Settable` as an argument but invoke a pointer method.
To repeat, we can't use `Settable` because it doesn't have a `Set`
method, and we can't use `*Settable` because then we can't create a
slice of type `Settable`.
What we can do is pass both types.
```
// Setter2 is a type constraint that requires that the type
// implement a Set method that sets the value from a string,
// and also requires that the type be a pointer to its type parameter.
type Setter2[B any] interface {
Set(string)
*B // non-interface type constraint element
}
// FromStrings2 takes a slice of strings and returns a slice of T,
// calling the Set method to set each returned value.
//
// We use two different type parameters so that we can return
// a slice of type T but call methods on *T aka PT.
// The Setter2 constraint ensures that PT is a pointer to T.
func FromStrings2[T any, PT Setter2[T]](s []string) []T {
result := make([]T, len(s))
for i, v := range s {
// The type of &result[i] is *T which is in the type set
// of Setter2, so we can convert it to PT.
p := PT(&result[i])
// PT has a Set method.
p.Set(v)
}
return result
}
```
We can then call `FromStrings2` like this:
```
func F2() {
// FromStrings2 takes two type parameters.
// The second parameter must be a pointer to the first.
// Settable is as above.
nums := FromStrings2[Settable, *Settable]([]string{"1", "2"})
// Now nums is []Settable{1, 2}.
...
}
```
This approach works as expected, but it is awkward to have to repeat
`Settable` in the type arguments.
Fortunately, constraint type inference makes it less awkward.
Using constraint type inference we can write
```
func F3() {
// Here we just pass one type argument.
nums := FromStrings2[Settable]([]string{"1", "2"})
// Now nums is []Settable{1, 2}.
...
}
```
There is no way to avoid passing the type argument `Settable`.
But given that type argument, constraint type inference can infer the
type argument `*Settable` for the type parameter `PT`.
As before, we create a mapping of known type arguments:
```
{T -> Settable}
```
We then unify each type parameter with a structural constraint.
In this case, we unify `PT` with the single type of `Setter2[T]`,
which is `*T`.
The mapping is now
```
{T -> Settable, PT -> *T}
```
We then replace `T` with `Settable` throughout, giving us:
```
{T -> Settable, PT -> *Settable}
```
After this nothing changes, and we are done.
Both type arguments are known.
This example shows how we can use constraint type inference to apply a
constraint to a type that is based on some other type parameter.
In this case we are saying that `PT`, which is `*T`, must have a `Set`
method.
We can do this without requiring the caller to explicitly mention
`*T`.
##### Constraints apply even after constraint type inference
Even when constraint type inference is used to infer type arguments
based on constraints, we must still check the constraints after the
type arguments are determined.
In the `FromStrings2` example above, we were able to deduce the type
argument for `PT` based on the `Setter2` constraint.
But in doing so we only looked at the type set, we didn't look at the
methods.
We still have to verify that the method is there, satisfying the
constraint, even if constraint type inference succeeds.
For example, consider this invalid code:
```
// Unsettable is a type that does not have a Set method.
type Unsettable int
func F4() {
// This call is INVALID.
nums := FromStrings2[Unsettable]([]string{"1", "2"})
...
}
```
When this call is made, we will apply constraint type inference just
as before.
It will succeed, just as before, and infer that the type arguments are
`[Unsettable, *Unsettable]`.
Only after constraint type inference is complete will we check whether
`*Unsettable` implements the constraint `Setter2[Unsettable]`.
Since `*Unsettable` does not have a `Set` method, constraint checking
will fail, and this code will not compile.
### Using types that refer to themselves in constraints
It can be useful for a generic function to require a type argument
with a method whose argument is the type itself.
For example, this arises naturally in comparison methods.
(Note that we are talking about methods here, not operators.)
Suppose we want to write an `Index` method that uses an `Equal` method
to check whether it has found the desired value.
We would like to write that like this:
```
// Index returns the index of e in s, or -1 if not found.
func Index[T Equaler](s []T, e T) int {
for i, v := range s {
if e.Equal(v) {
return i
}
}
return -1
}
```
In order to write the `Equaler` constraint, we have to write a
constraint that can refer to the type argument being passed in.
The easiest way to do this is to take advantage of the fact that a
constraint does not have to be a defined type, it can simply be an
interface type literal.
This interface type literal can then refer to the type parameter.
```
// Index returns the index of e in s, or -1 if not found.
func Index[T interface { Equal(T) bool }](s []T, e T) int {
// same as above
}
```
This version of `Index` would be used with a type like `equalInt`
defined here:
```
// equalInt is a version of int that implements Equaler.
type equalInt int
// The Equal method lets equalInt implement the Equaler constraint.
func (a equalInt) Equal(b equalInt) bool { return a == b }
// indexEqualInt returns the index of e in s, or -1 if not found.
func indexEqualInt(s []equalInt, e equalInt) int {
// The type argument equalInt is shown here for clarity.
// Function argument type inference would permit omitting it.
return Index[equalInt](s, e)
}
```
In this example, when we pass `equalInt` to `Index`, we check whether
`equalInt` implements the constraint `interface { Equal(T) bool }`.
The constraint has a type parameter, so we replace the type parameter
with the type argument, which is `equalInt` itself.
That gives us `interface { Equal(equalInt) bool }`.
The `equalInt` type has an `Equal` method with that signature, so all
is well, and the compilation succeeds.
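As a runnable sketch of this example, with a small usage demonstration:

```go
package main

import "fmt"

// Index returns the index of e in s, or -1 if not found.
// The interface type literal constraint refers to the type parameter T.
func Index[T interface{ Equal(T) bool }](s []T, e T) int {
	for i, v := range s {
		if e.Equal(v) {
			return i
		}
	}
	return -1
}

// equalInt is a version of int that has an Equal method.
type equalInt int

// Equal reports whether a and b are the same value.
func (a equalInt) Equal(b equalInt) bool { return a == b }

func main() {
	s := []equalInt{3, 5, 7}
	fmt.Println(Index(s, equalInt(5))) // 1
	fmt.Println(Index(s, equalInt(4))) // -1
}
```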
### Values of type parameters are not boxed
In the current implementations of Go, interface values always hold
pointers.
Putting a non-pointer value in an interface variable causes the value
to be _boxed_.
That means that the actual value is stored somewhere else, on the heap
or stack, and the interface value holds a pointer to that location.
In this design, values of generic types are not boxed.
For example, let's look back at our earlier example of
`FromStrings2`.
When it is instantiated with type `Settable`, it returns a value of
type `[]Settable`.
For example, we can write
```
// Settable is an integer type that can be set from a string.
type Settable int
// Set sets the value of *p from a string.
func (p *Settable) Set(s string) {
// same as above
}
func F() {
// The type of nums is []Settable.
nums := FromStrings2[Settable]([]string{"1", "2"})
// Settable can be converted directly to int.
// This will set first to 1.
first := int(nums[0])
...
}
```
When we call `FromStrings2` instantiated with the type `Settable` we
get back a `[]Settable`.
The elements of that slice will be `Settable` values, which is to say,
they will be integers.
They will not be boxed, even though they were created and set by a
generic function.
Similarly, when a generic type is instantiated it will have the
expected types as components.
```
type Pair[F1, F2 any] struct {
first F1
second F2
}
```
When this is instantiated, the fields will not be boxed, and no
unexpected memory allocations will occur.
The type `Pair[int, string]` is convertible to `struct { first int;
second string }`.
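That conversion can be sketched directly (within the same package, so the unexported field names match):

```go
package main

import "fmt"

// Pair is a generic struct type as defined above.
type Pair[F1, F2 any] struct {
	first  F1
	second F2
}

func main() {
	p := Pair[int, string]{first: 1, second: "hello"}
	// The instantiated type converts directly to the equivalent
	// struct type; no boxing or allocation is involved.
	s := struct {
		first  int
		second string
	}(p)
	fmt.Println(s.first, s.second) // 1 hello
}
```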
### More on type sets
Let's return now to type sets to cover some less important details
that are still worth noting.
#### Both elements and methods in constraints
As seen earlier for `Setter2`, a constraint may use both constraint
elements and methods.
```
// StringableSignedInteger is a type constraint that matches any
// type that is 1) defined as a signed integer type and
// 2) has a String method.
type StringableSignedInteger interface {
~int | ~int8 | ~int16 | ~int32 | ~int64
String() string
}
```
The rules for type sets define what this means.
The type set of the union element is the set of all types whose
underlying type is one of the predeclared signed integer types.
The type set of `String() string` is the set of all types that define
that method.
The type set of `StringableSignedInteger` is the intersection of those
two type sets.
The result is the set of all types whose underlying type is one of the
predeclared signed integer types and that defines the method `String()
string`.
A function with a type parameter `P` that uses
`StringableSignedInteger` as a constraint may use the operations
permitted for any integer type (`+`, `*`, and so forth) on a value of
type `P`.
It may also call the `String` method on a value of type `P` to get
back a `string`.
It's worth noting that the `~` is essential here.
The `StringableSignedInteger` constraint uses `~int`, not `int`.
The type `int` would not itself be permitted as a type argument, since
`int` does not have a `String` method.
An example of a type argument that would be permitted is `MyInt`,
defined as:
```
// MyInt is a stringable int.
type MyInt int
// The String method returns a string representation of mi.
func (mi MyInt) String() string {
return fmt.Sprintf("MyInt(%d)", mi)
}
```
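A sketch of a generic function that exercises both halves of the constraint; `DescribeDouble` is a hypothetical name, not part of the design, that applies `+` from the union element and then calls the `String` method:

```go
package main

import "fmt"

// StringableSignedInteger is the constraint defined above.
type StringableSignedInteger interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64
	String() string
}

// MyInt is a stringable int.
type MyInt int

// The String method returns a string representation of mi.
func (mi MyInt) String() string {
	return fmt.Sprintf("MyInt(%d)", mi)
}

// DescribeDouble is a hypothetical function that uses both parts of
// the constraint: the + operator permitted by the union element, and
// the String method required by the method element.
func DescribeDouble[P StringableSignedInteger](v P) string {
	return (v + v).String()
}

func main() {
	fmt.Println(DescribeDouble(MyInt(21))) // MyInt(42)
}
```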
#### Composite types in constraints
As we've seen in some earlier examples, a constraint element may be a
type literal.
```
type byteseq interface {
string | []byte
}
```
The usual rules apply: the type argument for this constraint may be
`string` or `[]byte`; a generic function with this constraint may use
any operation permitted by both `string` and `[]byte`.
The `byteseq` constraint permits writing generic functions that work
for either `string` or `[]byte` types.
```
// Join concatenates the elements of its first argument to create a
// single value. sep is placed between elements in the result.
// Join works for string and []byte types.
func Join[T byteseq](a []T, sep T) (ret T) {
if len(a) == 0 {
// Use the result parameter as a zero value;
// see discussion of zero value in the Issues section.
return ret
}
if len(a) == 1 {
// We know that a[0] is either a string or a []byte.
// We can append either a string or a []byte to a []byte,
// producing a []byte. We can convert that []byte to
// either a []byte (a no-op conversion) or a string.
return T(append([]byte(nil), a[0]...))
}
// We can call len on sep because we can call len
// on both string and []byte.
n := len(sep) * (len(a) - 1)
for _, v := range a {
// Another case where we call len on string or []byte.
n += len(v)
}
b := make([]byte, n)
// We can call copy to a []byte with an argument of
// either string or []byte.
bp := copy(b, a[0])
for _, s := range a[1:] {
bp += copy(b[bp:], sep)
bp += copy(b[bp:], s)
}
// As above, we can convert b to either []byte or string.
return T(b)
}
```
For composite types (string, pointer, array, slice, struct, function,
map, channel) we impose an additional restriction: an operation may
only be used if the operator accepts identical input types (if any)
and produces identical result types for all of the types in the type
set.
To be clear, this additional restriction is only imposed when a
composite type appears in a type set.
It does not apply when a composite type is formed from a type
parameter outside of a type set, as in `var v []T` for some type
parameter `T`.
```
// structField is a type constraint whose type set consists of some
// struct types that all have a field named x.
type structField interface {
struct { a int; x int } |
struct { b int; x float64 } |
struct { c int; x uint64 }
}
// This function is INVALID.
func IncrementX[T structField](p *T) {
v := p.x // INVALID: type of p.x is not the same for all types in set
v++
p.x = v
}
// sliceOrMap is a type constraint for a slice or a map.
type sliceOrMap interface {
[]int | map[int]int
}
// Entry returns the i'th entry in a slice or the value of a map
// at key i. This is valid as the result of the operator is always int.
func Entry[T sliceOrMap](c T, i int) int {
// This is either a slice index operation or a map key lookup.
// Either way, the index and result types are type int.
return c[i]
}
// sliceOrFloatMap is a type constraint for a slice or a map.
type sliceOrFloatMap interface {
[]int | map[float64]int
}
// This function is INVALID.
// In this example the input type of the index operation is either
// int (for a slice) or float64 (for a map), so the operation is
// not permitted.
func FloatEntry[T sliceOrFloatMap](c T) int {
return c[1.0] // INVALID: input type is either int or float64.
}
```
Imposing this restriction makes it easier to reason about the type of
some operation in a generic function.
It avoids introducing the notion of a value with a constructed type
set based on applying some operation to each element of a type set.
(Note: with more understanding of how people want to write code, it
may be possible to relax this restriction in the future.)
#### Type parameters in type sets
A type literal in a constraint element can refer to type parameters of
the constraint.
In this example, the generic function `Map` takes two type parameters.
The first type parameter is required to have an underlying type that
is a slice of the second type parameter.
There are no constraints on the second type parameter.
```
// SliceConstraint is a type constraint that matches a slice of
// the type parameter.
type SliceConstraint[T any] interface {
~[]T
}
// Map takes a slice of some element type and a transformation function,
// and returns a slice of the function applied to each element.
// Map returns a slice that is the same type as its slice argument,
// even if that is a defined type.
func Map[S SliceConstraint[E], E any](s S, f func(E) E) S {
r := make(S, len(s))
for i, v := range s {
r[i] = f(v)
}
return r
}
// MySlice is a simple defined type.
type MySlice []int
// DoubleMySlice takes a value of type MySlice and returns a new
// MySlice value with each element doubled in value.
func DoubleMySlice(s MySlice) MySlice {
// The type arguments listed explicitly here could be inferred.
v := Map[MySlice, int](s, func(e int) int { return 2 * e })
// Here v has type MySlice, not type []int.
return v
}
```
We showed other examples of this earlier in the discussion of
[constraint type inference](#Constraint-type-inference).
#### Type conversions
In a function with two type parameters `From` and `To`, a value of
type `From` may be converted to a value of type `To` if all the types
in the type set of `From`'s constraint can be converted to all the
types in the type set of `To`'s constraint.
This is a consequence of the general rule that a generic function may
use any operation that is permitted by all types listed in the type
set.
For example:
```
type integer interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 |
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}
func Convert[To, From integer](from From) To {
to := To(from)
if From(to) != from {
panic("conversion out of range")
}
return to
}
```
The type conversions in `Convert` are permitted because Go permits
every integer type to be converted to every other integer type.
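As a usage sketch (restating the constraint and function so the example stands alone), a call that narrows an `int32` to an `int8` succeeds when the value fits and panics when it does not:

```
type integer interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}

func Convert[To, From integer](from From) To {
	to := To(from)
	if From(to) != from {
		panic("conversion out of range")
	}
	return to
}

// Convert[int8](int32(100)) returns int8(100).
// Convert[int8](int32(1000)) panics: converting 1000 to int8
// wraps, so converting back does not recover the original value.
```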
#### Untyped constants
Some functions use untyped constants.
An untyped constant is permitted with a value of a type parameter if
it is permitted with every type in the type set of the type
parameter's constraint.
As with type conversions, this is a consequence of the general rule
that a generic function may use any operation that is permitted by all
types in the type set.
```
type integer interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 |
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}
func Add10[T integer](s []T) {
for i, v := range s {
s[i] = v + 10 // OK: 10 can convert to any integer type
}
}
// This function is INVALID.
func Add1024[T integer](s []T) {
for i, v := range s {
s[i] = v + 1024 // INVALID: 1024 not permitted by int8/uint8
}
}
```
#### Type sets of embedded constraints
When a constraint embeds another constraint, the type set of the
outer constraint is the intersection of all the type sets involved.
If there are multiple embedded types, intersection preserves the
property that any type argument must satisfy the requirements of all
constraint elements.
```
// Addable matches any type that supports the + operator.
type Addable interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 |
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr |
~float32 | ~float64 | ~complex64 | ~complex128 |
~string
}
// Byteseq is a byte sequence: either string or []byte.
type Byteseq interface {
~string | ~[]byte
}
// AddableByteseq is a byte sequence that supports +.
// This is every type that is both Addable and Byteseq.
// In other words, just the type set ~string.
type AddableByteseq interface {
Addable
Byteseq
}
```
An embedded constraint may appear in a union element.
The type set of the union is, as usual, the union of the type sets of
the elements listed in the union.
```
// Signed is a constraint with a type set of all signed integer
// types.
type Signed interface {
~int | ~int8 | ~int16 | ~int32 | ~int64
}
// Unsigned is a constraint with a type set of all unsigned integer
// types.
type Unsigned interface {
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}
// Integer is a constraint with a type set of all integer types.
type Integer interface {
Signed | Unsigned
}
```
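A function constrained by the combined `Integer` constraint can then use any operation common to all integer types.
A small sketch (with the constraints restated so it stands alone, and a hypothetical `Sum` function):

```
type Signed interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64
}

type Unsigned interface {
	~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}

type Integer interface {
	Signed | Unsigned
}

// Sum adds the elements of a slice of any integer type.
// The + operator is permitted because every type in the type
// set of Integer supports it.
func Sum[T Integer](s []T) T {
	var total T
	for _, v := range s {
		total += v
	}
	return total
}
```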
#### Interface types in union elements
We've said that the type set of a union element is the union of the
type sets of all types in the union.
For most types `T` the type set of `T` is simply `T` itself.
For interface types (and approximation elements), however, that is not
the case.
The type set of an interface type that does not embed a non-interface
element is, as we said earlier, the set of all types that declare all
the methods of the interface, including the interface type itself.
Using such an interface type in a union element will add that type set
to the union.
```
type Stringish interface {
string | fmt.Stringer
}
```
The type set of `Stringish` is the type `string` and all types that
implement `fmt.Stringer`.
Any of those types (including `fmt.Stringer` itself) will be permitted
as a type argument for this constraint.
No operations will be permitted for a value of a type parameter that
uses `Stringish` as a constraint (other than operations supported by
all types).
This is because `fmt.Stringer` is in the type set of `Stringish`, and
`fmt.Stringer`, an interface type, does not support any type-specific
operations.
The operations permitted by `Stringish` are those operations supported
by all the types in the type set, including `fmt.Stringer`, so in this
case there are no operations other than those supported by all types.
A parameterized function that uses this constraint will have to use
type assertions or reflection in order to use the values.
Still, this may be useful in some cases for stronger static type
checking.
The main point is that it follows directly from the definition of type
sets and constraint satisfaction.
#### Empty type sets
It is possible to write a constraint with an empty type set.
There is no type argument that will satisfy such a constraint,
so any attempt to instantiate a function that uses a constraint with an
empty type set will fail.
It is not possible in general for the compiler to detect all such
cases.
Probably the vet tool should give an error for cases that it is able
to detect.
```
// Unsatisfiable is an unsatisfiable constraint with an empty type set.
// No predeclared types have any methods.
// If this used ~int | ~float32 the type set would not be empty.
type Unsatisfiable interface {
int | float32
String() string
}
```
#### General notes on type sets
It may seem awkward to explicitly list types in a constraint, but the
list makes clear both which type arguments are permitted at the call
site and which operations are permitted by the generic function.
If the language later changes to support operator methods (there are
no such plans at present), then constraints will handle them as they
do any other kind of method.
There will always be a limited number of predeclared types, and a
limited number of operators that those types support.
Future language changes will not fundamentally change those facts, so
this approach will continue to be useful.
This approach does not attempt to handle every possible operator.
The expectation is that composite types will normally be handled using
composite types in generic function and type declarations, rather than
putting composite types in a type set.
For example, we expect functions that want to index into a slice to be
parameterized on the slice element type `T`, and to use parameters or
variables of type `[]T`.
As shown in the `DoubleMySlice` example above, this approach makes it
awkward to declare generic functions that accept and return a
composite type and want to return the same result type as their
argument type.
Defined composite types are not common, but they do arise.
This awkwardness is a weakness of this approach.
Constraint type inference can help at the call site.
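To make the expected style concrete, here is a sketch of a function that indexes into a slice by taking a `[]T` parameter directly, rather than naming a slice type in a constraint (the function name is hypothetical):

```
// Last returns the final element of a slice.
// The function is parameterized on the element type T and takes a
// parameter of type []T, so no composite type appears in a
// constraint, and indexing is an ordinary slice operation.
func Last[T any](s []T) T {
	return s[len(s)-1]
}
```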
### Reflection
We do not propose to change the reflect package in any way.
When a type or function is instantiated, all of the type parameters
will become ordinary non-generic types.
The `String` method of a `reflect.Type` value of an instantiated type
will return the name with the type arguments in square brackets.
For example, `List[int]`.
It's impossible for non-generic code to refer to generic code without
instantiating it, so there is no reflection information for
uninstantiated generic types or functions.
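For example, assuming a `List` type like the one mentioned above, the reflected name of an instantiation includes the type argument in square brackets (current implementations also qualify the name with the package):

```
import "reflect"

// List is a simple generic linked list type.
type List[T any] struct {
	next *List[T]
	val  T
}

// InstantiatedName returns the reflect.Type name of List[int].
// The result ends in "List[int]"; implementations may prefix it
// with the package name.
func InstantiatedName() string {
	return reflect.TypeOf(List[int]{}).String()
}
```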
### Implementation
Russ Cox [famously observed](https://research.swtch.com/generic) that
generics require choosing among slow programmers, slow compilers, or
slow execution times.
We believe that this design permits different implementation choices.
Code may be compiled separately for each set of type arguments, or it
may be compiled as though each type argument is handled similarly to
an interface type with method calls, or there may be some combination
of the two.
In other words, this design permits people to stop choosing slow
programmers, and permits the implementation to decide between slow
compilers (compile each set of type arguments separately) or slow
execution times (use method calls for each operation on a value of a
type argument).
### Summary
While this document is long and detailed, the actual design reduces to
a few major points.
* Functions and types can have type parameters, which are defined
using constraints, which are interface types.
* Constraints describe the methods required and the types permitted
for a type argument.
* Constraints describe the methods and operations permitted for a type
parameter.
* Type inference will often permit omitting type arguments when
calling functions with type parameters.
This design is completely backward compatible.
We believe that this design addresses people's needs for generic
programming in Go, without making the language any more complex than
necessary.
We can't truly know the impact on the language without years of
experience with this design.
That said, here are some speculations.
#### Complexity
One of the great aspects of Go is its simplicity.
Clearly this design makes the language more complex.
We believe that the increased complexity is small for people reading
well written generic code, rather than writing it.
Naturally people must learn the new syntax for declaring type
parameters.
This new syntax, and the new support for type sets in interfaces, are
the only new syntactic constructs in this design.
The code within a generic function reads like ordinary Go code, as can
be seen in the examples below.
It is an easy shift to go from `[]int` to `[]T`.
Type parameter constraints serve effectively as documentation,
describing the type.
We expect that most packages will not define generic types or
functions, but many packages are likely to use generic types or
functions defined elsewhere.
In the common case, generic functions work exactly like non-generic
functions: you simply call them.
Type inference means that you do not have to write out the type
arguments explicitly.
The type inference rules are designed to be unsurprising: either the
type arguments are deduced correctly, or the call fails and requires
explicit type arguments.
Type inference uses type equivalence, with no attempt to resolve two
types that are similar but not equivalent, which removes significant
complexity.
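A sketch of what inference looks like at a call site, using a hypothetical `Min` function and `Ordered` constraint:

```
type Ordered interface {
	~int | ~int64 | ~float64 | ~string
}

// Min returns the smaller of two ordered values.
func Min[T Ordered](a, b T) T {
	if a < b {
		return a
	}
	return b
}

var a = Min(2, 3)        // type argument int is inferred
var b = Min[int64](5, 4) // explicit type argument
```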
Packages using generic types will have to pass explicit type
arguments.
The syntax for this is straightforward.
The only change is passing arguments to types rather than only to
functions.
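For example, with a hypothetical `Pair` type, the explicit type argument appears in square brackets just as it would for a function call (a sketch):

```
// Pair is a simple generic type.
// Using it requires an explicit type argument.
type Pair[T any] struct {
	First, Second T
}

var p = Pair[string]{"a", "b"}
```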
In general, we have tried to avoid surprises in the design.
Only time will tell whether we succeeded.
#### Pervasiveness
We expect that a few new packages will be added to the standard
library.
A new `slices` package will be similar to the existing `bytes` and
`strings` packages, operating on slices of any element type.
New `maps` and `chans` packages will provide algorithms that are
currently duplicated for each element type.
A `sets` package may be added.
A new `constraints` package will provide standard constraints, such as
constraints that permit all integer types or all numeric types.
Packages like `container/list` and `container/ring`, and types like
`sync.Map` and `sync/atomic.Value`, will be updated to be compile-time
type-safe, either using new names or new versions of the packages.
The `math` package will be extended to provide a set of simple
standard algorithms for all numeric types, such as the ever popular
`Min` and `Max` functions.
We may add generic variants to the `sort` package.
It is likely that new special purpose compile-time type-safe container
types will be developed.
We do not expect approaches like the C++ STL iterator types to become
widely used.
In Go that sort of idea is more naturally expressed using an interface
type.
In C++ terms, using an interface type for an iterator can be seen as
carrying an abstraction penalty, in that run-time efficiency will be
less than C++ approaches that in effect inline all code; we believe
that Go programmers will continue to find that sort of penalty to be
acceptable.
As we get more container types, we may develop a standard `Iterator`
interface.
That may in turn lead to pressure to modify the language to add some
mechanism for using an `Iterator` with the `range` clause.
That is very speculative, though.
#### Efficiency
It is not clear what sort of efficiency people expect from generic
code.
Generic functions, rather than generic types, can probably be compiled
using an interface-based approach.
That will optimize compile time, in that the function is only compiled
once, but there will be some run time cost.
Generic types may most naturally be compiled multiple times for each
set of type arguments.
This will clearly carry a compile time cost, but there shouldn't be
any run time cost.
Compilers can also choose to implement generic types similarly to
interface types, using special purpose methods to access each element
that depends on a type parameter.
Only experience will show what people expect in this area.
#### Omissions
We believe that this design covers the basic requirements for
generic programming.
However, there are a number of programming constructs that are not
supported.
* No specialization.
There is no way to write multiple versions of a generic function
that are designed to work with specific type arguments.
* No metaprogramming.
There is no way to write code that is executed at compile time to
generate code to be executed at run time.
* No higher level abstraction.
There is no way to use a function with type arguments other than to
call it or instantiate it.
There is no way to use a generic type other than to instantiate it.
* No general type description.
In order to use operators in a generic function, constraints list
specific types, rather than describing the characteristics that a
type must have.
This is easy to understand but may be limiting at times.
* No covariance or contravariance of function parameters.
* No operator methods.
You can write a generic container that is compile-time type-safe,
but you can only access it with ordinary methods, not with syntax
like `c[k]`.
* No currying.
There is no way to partially instantiate a generic function or type,
other than by using a helper function or a wrapper type.
All type arguments must be either explicitly passed or inferred at
instantiation time.
* No variadic type parameters.
There is no support for variadic type parameters, which would permit
writing a single generic function that takes different numbers of
both type parameters and regular parameters.
* No adaptors.
There is no way for a constraint to define adaptors that could be
used to support type arguments that do not already implement the
constraint, such as, for example, defining an `==` operator in terms
of an `Equal` method, or vice-versa.
* No parameterization on non-type values such as constants.
This arises most obviously for arrays, where it might sometimes be
convenient to write `type Matrix[n int] [n][n]float64`.
It might also sometimes be useful to specify significant values for
a container type, such as a default value for elements.
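To illustrate the point about operator methods above, here is a sketch of a compile-time type-safe map wrapper (a hypothetical type, not proposed for the standard library); elements must be accessed through ordinary methods such as `Get` and `Set`, since there is no way to define `m[k]` syntax for it:

```
// Map is a compile-time type-safe map wrapper.
// Elements are accessed with methods; there is no way to support
// the index syntax m[k] for this type.
type Map[K comparable, V any] struct {
	m map[K]V
}

// NewMap returns an empty Map.
func NewMap[K comparable, V any]() *Map[K, V] {
	return &Map[K, V]{m: make(map[K]V)}
}

// Set stores v under k.
func (m *Map[K, V]) Set(k K, v V) { m.m[k] = v }

// Get returns the value stored under k, and whether it was present.
func (m *Map[K, V]) Get(k K) (V, bool) {
	v, ok := m.m[k]
	return v, ok
}
```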
#### Issues
There are some issues with this design that deserve a more detailed
discussion.
We think these issues are relatively minor compared to the design as a
whole, but they still deserve a complete hearing and discussion.
##### The zero value
This design has no simple expression for the zero value of a type
parameter.
For example, consider this implementation of optional values that uses
pointers:
```
type Optional[T any] struct {
p *T
}
func (o Optional[T]) Val() T {
if o.p != nil {
return *o.p
}
var zero T
return zero
}
```
In the case where `o.p == nil`, we want to return the zero value of
`T`, but we have no way to write that.
It would be nice to be able to write `return nil`, but that wouldn't
work if `T` is, say, `int`; in that case we would have to write
`return 0`.
And, of course, there is no way to write a constraint to support
either `return nil` or `return 0`.
Some approaches to this are:
* Use `var zero T`, as above, which works with the existing design
but requires an extra statement.
* Use `*new(T)`, which is cryptic but works with the existing
design.
* For results only, name the result parameter, and use a naked
`return` statement to return the zero value.
* Extend the design to permit using `nil` as the zero value of any
generic type (but see [issue 22729](https://golang.org/issue/22729)).
* Extend the design to permit using `T{}`, where `T` is a type
parameter, to indicate the zero value of the type.
* Change the language to permit using `_` on the right hand of an
assignment (including `return` or a function call) as proposed in
[issue 19642](https://golang.org/issue/19642).
* Change the language to permit `return ...` to return zero values of
the result types, as proposed in
[issue 21182](https://golang.org/issue/21182).
We feel that more experience with this design is needed before
deciding what, if anything, to do here.
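The first three approaches work with the design as proposed; sketches of each (the function names are illustrative):

```
// Zero1 uses a variable declaration, which yields the zero value.
func Zero1[T any]() T {
	var zero T
	return zero
}

// Zero2 uses *new(T), which is cryptic but compact.
func Zero2[T any]() T {
	return *new(T)
}

// Zero3 names the result parameter and uses a naked return.
func Zero3[T any]() (zero T) {
	return
}
```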
##### Identifying the matched predeclared type
The design doesn't provide any way to test the underlying type matched
by a `~T` constraint element.
Code can test the actual type argument through the somewhat awkward
approach of converting to an empty interface type and using a type
assertion or a type switch.
But that lets code test the actual type argument, which is not the
same as the underlying type.
Here is an example that shows the difference.
```
type Float interface {
~float32 | ~float64
}
func NewtonSqrt[T Float](v T) T {
var iterations int
switch (interface{})(v).(type) {
case float32:
iterations = 4
case float64:
iterations = 5
default:
panic(fmt.Sprintf("unexpected type %T", v))
}
// Code omitted.
}
type MyFloat float32
var G = NewtonSqrt(MyFloat(64))
```
This code will panic when initializing `G`, because the type of `v` in
the `NewtonSqrt` function will be `MyFloat`, not `float32` or
`float64`.
What this function actually wants to test is not the type of `v`, but
the approximate type that `v` matched in the constraint's type set.
One way to handle this would be to permit writing approximate types in
a type switch, as in `case ~float32:`.
Such a case would match any type whose underlying type is `float32`.
This would be meaningful, and potentially useful, even in type
switches outside of generic functions.
##### No way to express convertibility
The design has no way to express convertibility between two different
type parameters.
For example, there is no way to write this function:
```
// Copy copies values from src to dst, converting them as they go.
// It returns the number of items copied, which is the minimum of
// the lengths of dst and src.
// This implementation is INVALID.
func Copy[T1, T2 any](dst []T1, src []T2) int {
for i, x := range src {
if i >= len(dst) {
return i
}
dst[i] = T1(x) // INVALID
}
return len(src)
}
```
The conversion from type `T2` to type `T1` is invalid, as there is no
constraint on either type that permits the conversion.
Worse, there is no way to write such a constraint in general.
In the particular case where both `T1` and `T2` have limited type sets
this function can be written as described earlier when discussing
[type conversions using type sets](#Type-conversions).
But, for example, there is no way to write a constraint for the case
in which `T1` is an interface type and `T2` is a type that implements
that interface.
It's worth noting that if `T1` is an interface type then this can be
written using a conversion to the empty interface type and a type
assertion, but this is, of course, not compile-time type-safe.
```
// Copy copies values from src to dst, converting them as they go.
// It returns the number of items copied, which is the minimum of
// the lengths of dst and src.
func Copy[T1, T2 any](dst []T1, src []T2) int {
for i, x := range src {
if i >= len(dst) {
return i
}
dst[i] = (interface{})(x).(T1)
}
return len(src)
}
```
##### No parameterized methods
This design does not permit methods to declare type parameters that
are specific to the method.
The receiver may have type parameters, but the method may not add any
type parameters.
In Go, one of the main roles of methods is to permit types to
implement interfaces.
It is not clear whether it is reasonably possible to permit
parameterized methods to implement interfaces.
For example, consider this code, which uses the obvious syntax for
parameterized methods.
This code uses multiple packages to make the problem clearer.
```
package p1
// S is a type with a parameterized method Identity.
type S struct{}
// Identity is a simple identity method that works for any type.
func (S) Identity[T any](v T) T { return v }
package p2
// HasIdentity is an interface that matches any type with a
// parameterized Identity method.
type HasIdentity interface {
Identity[T any](T) T
}
package p3
import "p2"
// CheckIdentity checks the Identity method if it exists.
// Note that although this function calls a parameterized method,
// this function is not itself parameterized.
func CheckIdentity(v interface{}) {
if vi, ok := v.(p2.HasIdentity); ok {
if got := vi.Identity[int](0); got != 0 {
panic(got)
}
}
}
package p4
import (
"p1"
"p3"
)
// CheckSIdentity passes an S value to CheckIdentity.
func CheckSIdentity() {
p3.CheckIdentity(p1.S{})
}
```
In this example, we have a type `p1.S` with a parameterized method and
a type `p2.HasIdentity` that also has a parameterized method.
`p1.S` implements `p2.HasIdentity`.
Therefore, the function `p3.CheckIdentity` can call `vi.Identity` with
an `int` argument, which in the call from `p4.CheckSIdentity` will be
a call to `p1.S.Identity[int]`.
But package p3 does not know anything about the type `p1.S`.
There may be no other call to `p1.S.Identity` elsewhere in the
program.
We need to instantiate `p1.S.Identity[int]` somewhere, but how?
We could instantiate it at link time, but in the general case that
requires the linker to traverse the complete call graph of the program
to determine the set of types that might be passed to `CheckIdentity`.
And even that traversal is not sufficient in the general case when
type reflection gets involved, as reflection might look up methods
based on strings input by the user.
So in general instantiating parameterized methods in the linker might
require instantiating every parameterized method for every possible
type argument, which seems untenable.
Or, we could instantiate it at run time.
In general this means using some sort of JIT, or compiling the code to
use some sort of reflection based approach.
Either approach would be very complex to implement, and would be
surprisingly slow at run time.
Or, we could decide that parameterized methods do not, in fact,
implement interfaces, but then it's much less clear why we need
methods at all.
If we disregard interfaces, any parameterized method can be
implemented as a parameterized function.
So while parameterized methods seem clearly useful at first glance, we
would have to decide what they mean and how to implement that.
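Disregarding interfaces, the rewrite is mechanical; a sketch of the `Identity` example as a top-level parameterized function, with the receiver turned into an ordinary parameter:

```
// S is the example type, now without a parameterized method.
type S struct{}

// Identity is a parameterized function; the former receiver is an
// ordinary (here unused) parameter.
func Identity[T any](_ S, v T) T { return v }
```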
##### No way to require pointer methods
In some cases a parameterized function is naturally written such that
it always invokes methods on addressable values.
For example, this happens when calling a method on each element of a
slice.
In such a case, the function only requires that the method be in the
slice element type's pointer method set.
The type constraints described in this design have no way to write
that requirement.
For example, consider a variant of the `Stringify` example we [showed
earlier](#Using-a-constraint).
```
// Stringify2 calls the String method on each element of s,
// and returns the results.
func Stringify2[T Stringer](s []T) (ret []string) {
for i := range s {
ret = append(ret, s[i].String())
}
return ret
}
```
Suppose we have a `[]bytes.Buffer` and we want to convert it into a
`[]string`.
The `Stringify2` function here won't help us.
We want to write `Stringify2[bytes.Buffer]`, but we can't, because
`bytes.Buffer` doesn't have a `String` method.
The type that has a `String` method is `*bytes.Buffer`.
Writing `Stringify2[*bytes.Buffer]` doesn't help because that function
expects a `[]*bytes.Buffer`, but we have a `[]bytes.Buffer`.
We discussed a similar case in the [pointer method
example](#Pointer-method-example) above.
There we used constraint type inference to help simplify the problem.
Here that doesn't help, because `Stringify2` doesn't really care about
calling a pointer method.
It just wants a type that has a `String` method, and it's OK if the
method is only in the pointer method set, not the value method set.
But we also want to accept the case where the method is in the value
method set, for example if we really do have a `[]*bytes.Buffer`.
What we need is a way to say that the type constraint applies to
either the pointer method set or the value method set.
The body of the function would be required to only call the method on
addressable values of the type.
It's not clear how often this problem comes up in practice.
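One workaround that the design does permit (a sketch, not part of the proposal text) is to add a second type parameter constrained to be the pointer type `*T` with the required method, and to call the method through that pointer:

```
import "fmt"

// Stringify3 is like Stringify2, but also accepts element types
// whose String method is only in the pointer method set.
// PT is constrained to be *T and to implement fmt.Stringer.
func Stringify3[T any, PT interface{ *T; fmt.Stringer }](s []T) (ret []string) {
	for i := range s {
		// &s[i] has type *T, which converts to PT.
		ret = append(ret, PT(&s[i]).String())
	}
	return ret
}

// Val is a hypothetical example type whose String method has a
// pointer receiver, like bytes.Buffer.
type Val struct{ N int }

func (v *Val) String() string { return "val" }
```

Constraint type inference will typically deduce `PT` from `T`, so a call can be written as simply `Stringify3(s)`.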
##### No association between float and complex
Constraint type inference lets us give a name to the element of a
slice type, and to apply other similar type decompositions.
However, there is no way to associate a float type and a complex type.
For example, there is no way to write the predeclared `real`, `imag`,
or `complex` functions with this design.
There is no way to say "if the argument type is `complex64`, then the
result type is `float32`."
One possible approach here would be to permit `real(T)` as a type
constraint meaning "the float type associated with the complex type
`T`".
Similarly, `complex(T)` would mean "the complex type associated with
the floating point type `T`".
Constraint type inference would simplify the call site.
However, that would be unlike other type constraints.
#### Discarded ideas
This design is not perfect, and there may be ways to improve it.
That said, there are many ideas that we've already considered in
detail.
This section lists some of those ideas in the hopes that it will help
to reduce repetitive discussion.
The ideas are presented in the form of a FAQ.
##### What happened to contracts?
An earlier draft design of generics implemented constraints using a
new language construct called contracts.
Type sets appeared only in contracts, rather than in interface types.
However, many people had a hard time understanding the difference
between contracts and interface types.
It also turned out that contracts could be represented as a set of
corresponding interfaces; there was no loss in expressive power
without contracts.
We decided to simplify the approach to use only interface types.
##### Why not use methods instead of type sets?
_Type sets are weird._
_Why not write methods for all operators?_
It is possible to permit operator tokens as method names, leading to
methods such as `+(T) T`.
Unfortunately, that is not sufficient.
We would need some mechanism to describe a type that matches any
integer type, for operations such as shifts `<<(integer) T` and
indexing `[](integer) T` which are not restricted to a single int
type.
We would need an untyped boolean type for operations such as `==(T)
untyped bool`.
We would need to introduce new notation for operations such as
conversions, or to express that one may range over a type, which would
likely require some new syntax.
We would need some mechanism to describe valid values of untyped
constants.
We would have to consider whether support for `<(T) bool` means that a
generic function can also use `<=`, and similarly whether support for
`+(T) T` means that a function can also use `++`.
It might be possible to make this approach work but it's not
straightforward.
The approach used in this design seems simpler and relies on only one
new syntactic construct (type sets) and one new name (`comparable`).
##### Why not put type parameters on packages?
We investigated this extensively.
It becomes problematic when you want to write a `list` package, and
you want that package to include a `Transform` function that converts
a `List` of one element type to a `List` of another element type.
It's very awkward for a function in one instantiation of a package to
return a type that requires a different instantiation of the same
package.
It also confuses package boundaries with type definitions.
There is no particular reason to think that the uses of generic types
will break down neatly into packages.
Sometimes they will, sometimes they won't.
##### Why not use the syntax `F<T>` like C++ and Java?
When parsing code within a function, such as `v := F<T>`, at the point
of seeing the `<` it's ambiguous whether we are seeing a type
instantiation or an expression using the `<` operator.
This is very difficult to resolve without type information.
For example, consider a statement like
```
a, b = w < x, y > (z)
```
Without type information, it is impossible to decide whether the right
hand side of the assignment is a pair of expressions (`w < x` and `y >
(z)`), or whether it is a generic function instantiation and call that
returns two result values (`(w<x, y>)(z)`).
It is a key design decision of Go that parsing be possible without
type information, which seems impossible when using angle brackets for
generics.
##### Why not use the syntax `F(T)`?
An earlier version of this design used that syntax.
It was workable but it introduced several parsing ambiguities.
For example, when writing `var f func(x(T))` it wasn't clear whether
the type was a function with a single unnamed parameter of the
instantiated type `x(T)` or whether it was a function with a parameter
named `x` with type `(T)` (more usually written as `func(x T)`, but in
this case with a parenthesized type).
There were other ambiguities as well.
For `[]T(v1)` and `[]T(v2){}`, at the point of the open parentheses we
don't know whether this is a type conversion (of the value `v1` to the
type `[]T`) or a type literal (whose type is the instantiated type
`T(v2)`).
For `interface { M(T) }` we don't know whether this is an interface with
a method `M` or an interface with an embedded instantiated interface
`M(T)`.
These ambiguities are solvable, by adding more parentheses, but
awkward.
Also some people were troubled by the number of parenthesized lists
involved in declarations like `func F(T any)(v T)(r1, r2 T)` or in
calls like `F(int)(1)`.
##### Why not use `F«T»`?
We considered it but we couldn't bring ourselves to require
non-ASCII.
##### Why not define constraints in a builtin package?
_Instead of writing out type sets, use names like_
_`constraints.Arithmetic` and `constraints.Comparable`._
Listing all the possible combinations of types gets rather lengthy.
It also introduces a new set of names that not only the writer of
generic code, but, more importantly, the reader, must remember.
One of the driving goals of this design is to introduce as few new
names as possible.
In this design we introduce only two new predeclared names,
`comparable` and `any`.
We expect that if people find such names useful, we can introduce a
package `constraints` that defines those names in the form of
constraints that can be used by other types and functions and embedded
in other constraints.
That will define the most useful names in the standard library while
giving programmers the flexibility to use other combinations of types
where appropriate.
##### Why not permit type assertions on values whose type is a type parameter?
In an earlier version of this design, we permitted using type
assertions and type switches on variables whose type was a type
parameter, or whose type was based on a type parameter.
We removed this facility because it is always possible to convert a
value of any type to the empty interface type, and then use a type
assertion or type switch on that.
Also, it was sometimes confusing that in a constraint with a type
set that uses approximation elements, a type assertion or type switch
would use the actual type argument, not the underlying type of the
type argument (the difference is explained in the section on
[identifying the matched predeclared type](#Identifying-the-matched-predeclared-type)).
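The convert-then-assert approach looks like this in practice (a sketch with a hypothetical function name):

```
// IsInt reports whether the type argument for T is exactly int,
// by converting v to the empty interface type and then using an
// ordinary type assertion.
func IsInt[T any](v T) bool {
	_, ok := (interface{})(v).(int)
	return ok
}
```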
#### Comparison with Java
Most complaints about Java generics center around type erasure.
This design does not have type erasure.
The reflection information for a generic type will include the full
compile-time type information.
In Java type wildcards (`List<? extends Number>`, `List<? super
Number>`) implement covariance and contravariance.
These concepts are missing from Go, which makes generic types much
simpler.
#### Comparison with C++
C++ templates do not enforce any constraints on the type arguments
(unless the concept proposal is adopted).
This means that changing template code can accidentally break far-off
instantiations.
It also means that error messages are reported only at instantiation
time, and can be deeply nested and difficult to understand.
This design avoids these problems through mandatory and explicit
constraints.
C++ supports template metaprogramming, which can be thought of as
ordinary programming done at compile time using a syntax that is
completely different than that of non-template C++.
This design has no similar feature.
This saves considerable complexity while losing some power and run
time efficiency.
C++ uses two-phase name lookup, in which some names are looked up in
the context of the template definition, and some names are looked up
in the context of the template instantiation.
In this design all names are looked up at the point where they are
written.
In practice, all C++ compilers compile each template at the point
where it is instantiated.
This can slow down compilation.
This design offers flexibility as to how to handle the compilation of
generic functions.
#### Comparison with Rust
The generics described in this design are similar to generics in
Rust.
One difference is that in Rust the association between a trait bound
and a type must be defined explicitly, either in the crate that
defines the trait bound or the crate that defines the type.
In Go terms, this would mean that we would have to declare somewhere
whether a type satisfied a constraint.
Just as Go types can satisfy Go interfaces without an explicit
declaration, in this design Go type arguments can satisfy a constraint
without an explicit declaration.
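For example — an illustrative sketch with hypothetical names — nothing in the declaration of `Celsius` below mentions the constraint, yet `Celsius` satisfies it, just as types satisfy ordinary Go interfaces:

```go
package main

import "fmt"

// Stringish is a constraint requiring a String method.
type Stringish interface {
	String() string
}

// Celsius satisfies Stringish implicitly: no declaration
// ties the type to the constraint.
type Celsius float64

func (c Celsius) String() string { return fmt.Sprintf("%g°C", float64(c)) }

// Join concatenates the string forms of its arguments.
func Join[T Stringish](vals ...T) string {
	s := ""
	for i, v := range vals {
		if i > 0 {
			s += ", "
		}
		s += v.String()
	}
	return s
}

func main() {
	fmt.Println(Join(Celsius(20), Celsius(25.5))) // 20°C, 25.5°C
}
```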
Where this design uses type sets, the Rust standard library defines
standard traits for operations like comparison.
These standard traits are automatically implemented by Rust's
primitive types, and can be implemented by user defined types as
well.
Rust provides a fairly extensive list of traits, at least 34, covering
all of the operators.
Rust supports type parameters on methods, which this design does not.
## Examples
The following sections are examples of how this design could be used.
This is intended to address specific areas where people have created
user experience reports concerned with Go's lack of generics.
### Map/Reduce/Filter
Here is an example of how to write map, reduce, and filter functions
for slices.
These functions are intended to correspond to the similar functions in
Lisp, Python, Java, and so forth.
```
// Package slices implements various slice algorithms.
package slices
// Map turns a []T1 to a []T2 using a mapping function.
// This function has two type parameters, T1 and T2.
// This works with slices of any type.
func Map[T1, T2 any](s []T1, f func(T1) T2) []T2 {
r := make([]T2, len(s))
for i, v := range s {
r[i] = f(v)
}
return r
}
// Reduce reduces a []T1 to a single value using a reduction function.
func Reduce[T1, T2 any](s []T1, initializer T2, f func(T2, T1) T2) T2 {
r := initializer
for _, v := range s {
r = f(r, v)
}
return r
}
// Filter filters values from a slice using a filter function.
// It returns a new slice with only the elements of s
// for which f returned true.
func Filter[T any](s []T, f func(T) bool) []T {
var r []T
for _, v := range s {
if f(v) {
r = append(r, v)
}
}
return r
}
```
Here are some example calls of these functions.
Type inference is used to determine the type arguments based on the
types of the non-type arguments.
```
s := []int{1, 2, 3}
floats := slices.Map(s, func(i int) float64 { return float64(i) })
// Now floats is []float64{1.0, 2.0, 3.0}.
sum := slices.Reduce(s, 0, func(i, j int) int { return i + j })
// Now sum is 6.
evens := slices.Filter(s, func(i int) bool { return i%2 == 0 })
// Now evens is []int{2}.
```
### Map keys
Here is how to get a slice of the keys of any map.
```
// Package maps provides general functions that work for all map types.
package maps
// Keys returns the keys of the map m in a slice.
// The keys will be returned in an unpredictable order.
// This function has two type parameters, K and V.
// Map keys must be comparable, so K has the predeclared
// constraint comparable. Map values can be any type.
func Keys[K comparable, V any](m map[K]V) []K {
r := make([]K, 0, len(m))
for k := range m {
r = append(r, k)
}
return r
}
```
In typical use the map key and value types will be inferred.
```
k := maps.Keys(map[int]int{1:2, 2:4})
// Now k is either []int{1, 2} or []int{2, 1}.
```
### Sets
Many people have asked for Go's builtin map type to be extended, or
rather reduced, to support a set type.
Here is a type-safe implementation of a set type, albeit one that uses
methods rather than operators like `[]`.
```
// Package sets implements sets of any comparable type.
package sets
// Set is a set of values.
type Set[T comparable] map[T]struct{}
// Make returns a set of some element type.
func Make[T comparable]() Set[T] {
return make(Set[T])
}
// Add adds v to the set s.
// If v is already in s this has no effect.
func (s Set[T]) Add(v T) {
s[v] = struct{}{}
}
// Delete removes v from the set s.
// If v is not in s this has no effect.
func (s Set[T]) Delete(v T) {
delete(s, v)
}
// Contains reports whether v is in s.
func (s Set[T]) Contains(v T) bool {
_, ok := s[v]
return ok
}
// Len reports the number of elements in s.
func (s Set[T]) Len() int {
return len(s)
}
// Iterate invokes f on each element of s.
// It's OK for f to call the Delete method.
func (s Set[T]) Iterate(f func(T)) {
for v := range s {
f(v)
}
}
```
Example use:
```
// Create a set of ints.
// We pass int as a type argument.
// Then we write () because Make does not take any non-type arguments.
// We have to pass an explicit type argument to Make.
// Function argument type inference doesn't work because the
// type argument to Make is only used for a result parameter type.
s := sets.Make[int]()
// Add the value 1 to the set s.
s.Add(1)
// Check that s does not contain the value 2.
if s.Contains(2) { panic("unexpected 2") }
```
This example shows how to use this design to provide a compile-time
type-safe wrapper around an existing API.
### Sort
Before the introduction of `sort.Slice`, a common complaint was the
need for boilerplate definitions in order to use `sort.Sort`.
With this design, we can add to the sort package as follows:
```
// Ordered is a type constraint that matches all ordered types.
// (An ordered type is one that supports the < <= >= > operators.)
// In practice this type constraint would likely be defined in
// a standard library package.
type Ordered interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 |
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr |
~float32 | ~float64 |
~string
}
// orderedSlice is an internal type that implements sort.Interface.
// The Less method uses the < operator. The Ordered type constraint
// ensures that T has a < operator.
type orderedSlice[T Ordered] []T
func (s orderedSlice[T]) Len() int { return len(s) }
func (s orderedSlice[T]) Less(i, j int) bool { return s[i] < s[j] }
func (s orderedSlice[T]) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
// OrderedSlice sorts the slice s in ascending order.
// The elements of s must be ordered using the < operator.
func OrderedSlice[T Ordered](s []T) {
// Convert s to the type orderedSlice[T].
// As s is []T, and orderedSlice[T] is defined as []T,
// this conversion is permitted.
// orderedSlice[T] implements sort.Interface,
	// so we can pass the result to sort.Sort.
// The elements will be sorted using the < operator.
sort.Sort(orderedSlice[T](s))
}
```
Now we can write:
```
s1 := []int32{3, 5, 2}
sort.OrderedSlice(s1)
// Now s1 is []int32{2, 3, 5}
s2 := []string{"a", "c", "b"}
sort.OrderedSlice(s2)
// Now s2 is []string{"a", "b", "c"}
```
Along the same lines, we can add a function for sorting using a
comparison function, similar to `sort.Slice` but writing the function
to take values rather than slice indexes.
```
// sliceFn is an internal type that implements sort.Interface.
// The Less method calls the cmp field.
type sliceFn[T any] struct {
s []T
cmp func(T, T) bool
}
func (s sliceFn[T]) Len() int { return len(s.s) }
func (s sliceFn[T]) Less(i, j int) bool { return s.cmp(s.s[i], s.s[j]) }
func (s sliceFn[T]) Swap(i, j int) { s.s[i], s.s[j] = s.s[j], s.s[i] }
// SliceFn sorts the slice s according to the function cmp.
func SliceFn[T any](s []T, cmp func(T, T) bool) {
sort.Sort(sliceFn[T]{s, cmp})
}
```
An example of calling this might be:
```
var s []*Person
// ...
sort.SliceFn(s, func(p1, p2 *Person) bool { return p1.Name < p2.Name })
```
### Channels
Many simple general purpose channel functions are never written,
because they must be written using reflection and the caller must type
assert the results.
With this design they become straightforward to write.
```
// Package chans implements various channel algorithms.
package chans
import "runtime"
// Drain drains any elements remaining on the channel.
func Drain[T any](c <-chan T) {
for range c {
}
}
// Merge merges two channels of some element type into a single channel.
func Merge[T any](c1, c2 <-chan T) <-chan T {
r := make(chan T)
go func(c1, c2 <-chan T, r chan<- T) {
defer close(r)
for c1 != nil || c2 != nil {
select {
case v1, ok := <-c1:
if ok {
r <- v1
} else {
c1 = nil
}
case v2, ok := <-c2:
if ok {
r <- v2
} else {
c2 = nil
}
}
}
}(c1, c2, r)
return r
}
// Ranger provides a convenient way to exit a goroutine sending values
// when the receiver stops reading them.
//
// Ranger returns a Sender and a Receiver. The Receiver provides a
// Next method to retrieve values. The Sender provides a Send method
// to send values and a Close method to stop sending values. The Next
// method indicates when the Sender has been closed, and the Send
// method indicates when the Receiver has been freed.
func Ranger[T any]() (*Sender[T], *Receiver[T]) {
c := make(chan T)
d := make(chan bool)
s := &Sender[T]{values: c, done: d}
r := &Receiver[T]{values: c, done: d}
// The finalizer on the receiver will tell the sender
// if the receiver stops listening.
runtime.SetFinalizer(r, r.finalize)
return s, r
}
// A Sender is used to send values to a Receiver.
type Sender[T any] struct {
values chan<- T
done <-chan bool
}
// Send sends a value to the receiver. It reports whether any more
// values may be sent; if it returns false the value was not sent.
func (s *Sender[T]) Send(v T) bool {
select {
case s.values <- v:
return true
case <-s.done:
// The receiver has stopped listening.
return false
}
}
// Close tells the receiver that no more values will arrive.
// After Close is called, the Sender may no longer be used.
func (s *Sender[T]) Close() {
close(s.values)
}
// A Receiver receives values from a Sender.
type Receiver[T any] struct {
values <-chan T
done chan<- bool
}
// Next returns the next value from the channel. The bool result
// reports whether the value is valid. If the value is not valid, the
// Sender has been closed and no more values will be received.
func (r *Receiver[T]) Next() (T, bool) {
v, ok := <-r.values
return v, ok
}
// finalize is a finalizer for the receiver.
// It tells the sender that the receiver has stopped listening.
func (r *Receiver[T]) finalize() {
close(r.done)
}
```
There is an example of using this function in the next section.
### Containers
One of the frequent requests for generics in Go is the ability to
write compile-time type-safe containers.
This design makes it easy to write a compile-time type-safe wrapper
around an existing container; we won't write out an example for that.
This design also makes it easy to write a compile-time type-safe
container that does not use boxing.
Here is an example of an ordered map implemented as a binary tree.
The details of how it works are not too important.
The important points are:
* The code is written in a natural Go style, using the key and value
types where needed.
* The keys and values are stored directly in the nodes of the tree,
not using pointers and not boxed as interface values.
```
// Package orderedmaps provides an ordered map, implemented as a binary tree.
package orderedmaps
import "chans"
// Map is an ordered map.
type Map[K, V any] struct {
root *node[K, V]
compare func(K, K) int
}
// node is the type of a node in the binary tree.
type node[K, V any] struct {
k K
v V
left, right *node[K, V]
}
// New returns a new map.
// Since the type parameter V is only used for the result,
// type inference does not work, and calls to New must always
// pass explicit type arguments.
func New[K, V any](compare func(K, K) int) *Map[K, V] {
return &Map[K, V]{compare: compare}
}
// find looks up k in the map, and returns either a pointer
// to the node holding k, or a pointer to the location where
// such a node would go.
func (m *Map[K, V]) find(k K) **node[K, V] {
pn := &m.root
for *pn != nil {
switch cmp := m.compare(k, (*pn).k); {
case cmp < 0:
pn = &(*pn).left
case cmp > 0:
pn = &(*pn).right
default:
return pn
}
}
return pn
}
// Insert inserts a new key/value into the map.
// If the key is already present, the value is replaced.
// Reports whether this is a new key.
func (m *Map[K, V]) Insert(k K, v V) bool {
pn := m.find(k)
if *pn != nil {
(*pn).v = v
return false
}
*pn = &node[K, V]{k: k, v: v}
return true
}
// Find returns the value associated with a key, or zero if not present.
// The bool result reports whether the key was found.
func (m *Map[K, V]) Find(k K) (V, bool) {
pn := m.find(k)
if *pn == nil {
var zero V // see the discussion of zero values, above
return zero, false
}
return (*pn).v, true
}
// keyValue is a pair of key and value used when iterating.
type keyValue[K, V any] struct {
k K
v V
}
// InOrder returns an iterator that does an in-order traversal of the map.
func (m *Map[K, V]) InOrder() *Iterator[K, V] {
type kv = keyValue[K, V] // convenient shorthand
sender, receiver := chans.Ranger[kv]()
var f func(*node[K, V]) bool
f = func(n *node[K, V]) bool {
if n == nil {
return true
}
// Stop sending values if sender.Send returns false,
// meaning that nothing is listening at the receiver end.
return f(n.left) &&
sender.Send(kv{n.k, n.v}) &&
f(n.right)
}
go func() {
f(m.root)
sender.Close()
}()
return &Iterator[K, V]{receiver}
}
// Iterator is used to iterate over the map.
type Iterator[K, V any] struct {
r *chans.Receiver[keyValue[K, V]]
}
// Next returns the next key and value pair. The bool result reports
// whether the values are valid. If the values are not valid, we have
// reached the end.
func (it *Iterator[K, V]) Next() (K, V, bool) {
kv, ok := it.r.Next()
return kv.k, kv.v, ok
}
```
This is what it looks like to use this package:
```
import "container/orderedmaps"
// Set m to an ordered map from string to string,
// using strings.Compare as the comparison function.
var m = orderedmaps.New[string, string](strings.Compare)
// Add adds the pair a, b to m.
func Add(a, b string) {
m.Insert(a, b)
}
```
### Append
The predeclared `append` function exists to replace the boilerplate
otherwise required to grow a slice.
Before `append` was added to the language, there was a function `Add`
in the bytes package:
```
// Add appends the contents of t to the end of s and returns the result.
// If s has enough capacity, it is extended in place; otherwise a
// new array is allocated and returned.
func Add(s, t []byte) []byte
```
`Add` appended two `[]byte` values together, returning a new slice.
That was fine for `[]byte`, but if you had a slice of some other
type, you had to write essentially the same code to append more
values.
If this design were available back then, perhaps we would not have
added `append` to the language.
Instead, we could write something like this:
```
// Package slices implements various slice algorithms.
package slices
// Append appends the contents of t to the end of s and returns the result.
// If s has enough capacity, it is extended in place; otherwise a
// new array is allocated and returned.
func Append[T any](s []T, t ...T) []T {
lens := len(s)
tot := lens + len(t)
if tot < 0 {
panic("Append: cap out of range")
}
if tot > cap(s) {
news := make([]T, tot, tot + tot/2)
copy(news, s)
s = news
}
s = s[:tot]
copy(s[lens:], t)
return s
}
```
That example uses the predeclared `copy` function, but that's OK, we
can write that one too:
```
// Copy copies values from t to s, stopping when either slice is
// full, returning the number of values copied.
func Copy[T any](s, t []T) int {
i := 0
for ; i < len(s) && i < len(t); i++ {
s[i] = t[i]
}
return i
}
```
These functions can be used as one would expect:
```
s := slices.Append([]int{1, 2, 3}, 4, 5, 6)
// Now s is []int{1, 2, 3, 4, 5, 6}.
slices.Copy(s[3:], []int{7, 8, 9})
// Now s is []int{1, 2, 3, 7, 8, 9}
```
This code doesn't implement the special case of appending or copying a
`string` to a `[]byte`, and it's unlikely to be as efficient as the
implementation of the predeclared function.
Still, this example shows that using this design would permit `append`
and `copy` to be written generically, once, without requiring any
additional special language features.
### Metrics
In a [Go experience
report](https://medium.com/@sameer_74231/go-experience-report-for-generics-google-metrics-api-b019d597aaa4)
Sameer Ajmani describes a metrics implementation.
Each metric has a value and one or more fields.
The fields have different types.
Defining a metric requires specifying the types of the fields.
The `Add` method takes the field types as arguments, and records an
instance of that set of fields.
The C++ implementation uses a variadic template.
The Java implementation includes the number of fields in the name of
the type.
Both the C++ and Java implementations provide compile-time type-safe
Add methods.
Here is how to use this design to provide similar functionality in
Go with a compile-time type-safe `Add` method.
Because there is no support for a variadic number of type arguments,
we must use different names for a different number of arguments, as in
Java.
This implementation only works for comparable types.
A more complex implementation could accept a comparison function to
work with arbitrary types.
```
// Package metrics provides a general mechanism for accumulating
// metrics of different values.
package metrics
import "sync"
// Metric1 accumulates metrics of a single value.
type Metric1[T comparable] struct {
mu sync.Mutex
m map[T]int
}
// Add adds an instance of a value.
func (m *Metric1[T]) Add(v T) {
m.mu.Lock()
defer m.mu.Unlock()
if m.m == nil {
m.m = make(map[T]int)
}
m.m[v]++
}
// key2 is an internal type used by Metric2.
type key2[T1, T2 comparable] struct {
f1 T1
f2 T2
}
// Metric2 accumulates metrics of pairs of values.
type Metric2[T1, T2 comparable] struct {
mu sync.Mutex
m map[key2[T1, T2]]int
}
// Add adds an instance of a value pair.
func (m *Metric2[T1, T2]) Add(v1 T1, v2 T2) {
m.mu.Lock()
defer m.mu.Unlock()
if m.m == nil {
m.m = make(map[key2[T1, T2]]int)
}
m.m[key2[T1, T2]{v1, v2}]++
}
// key3 is an internal type used by Metric3.
type key3[T1, T2, T3 comparable] struct {
f1 T1
f2 T2
f3 T3
}
// Metric3 accumulates metrics of triples of values.
type Metric3[T1, T2, T3 comparable] struct {
mu sync.Mutex
m map[key3[T1, T2, T3]]int
}
// Add adds an instance of a value triplet.
func (m *Metric3[T1, T2, T3]) Add(v1 T1, v2 T2, v3 T3) {
m.mu.Lock()
defer m.mu.Unlock()
if m.m == nil {
m.m = make(map[key3[T1, T2, T3]]int)
}
m.m[key3[T1, T2, T3]{v1, v2, v3}]++
}
// Repeat for the maximum number of permitted arguments.
```
Using this package looks like this:
```
import "metrics"
var m = metrics.Metric2[string, int]{}
func F(s string, i int) {
m.Add(s, i) // this call is type checked at compile time
}
```
This implementation has a certain amount of repetition due to the lack
of support for variadic type parameters.
Using the package, though, is easy and type safe.
### List transform
While slices are efficient and easy to use, there are occasional cases
where a linked list is appropriate.
This example primarily shows transforming a linked list of one type to
another type, as an example of using different instantiations of the
same generic type.
```
// Package lists provides a linked list of any type.
package lists
// List is a linked list.
type List[T any] struct {
head, tail *element[T]
}
// An element is an entry in a linked list.
type element[T any] struct {
next *element[T]
val T
}
// Push pushes an element to the end of the list.
func (lst *List[T]) Push(v T) {
if lst.tail == nil {
lst.head = &element[T]{val: v}
lst.tail = lst.head
} else {
lst.tail.next = &element[T]{val: v}
lst.tail = lst.tail.next
}
}
// Iterator ranges over a list.
type Iterator[T any] struct {
next **element[T]
}
// Range returns an Iterator starting at the head of the list.
func (lst *List[T]) Range() *Iterator[T] {
return &Iterator[T]{next: &lst.head}
}
// Next advances the iterator.
// It reports whether there are more elements.
func (it *Iterator[T]) Next() bool {
if *it.next == nil {
return false
}
it.next = &(*it.next).next
return true
}
// Val returns the value of the current element.
// The bool result reports whether the value is valid.
func (it *Iterator[T]) Val() (T, bool) {
if *it.next == nil {
var zero T
return zero, false
}
return (*it.next).val, true
}
// Transform runs a transform function on a list returning a new list.
func Transform[T1, T2 any](lst *List[T1], f func(T1) T2) *List[T2] {
ret := &List[T2]{}
it := lst.Range()
for {
if v, ok := it.Val(); ok {
ret.Push(f(v))
}
if !it.Next() {
break
}
}
return ret
}
```
### Dot product
A generic dot product implementation that works for slices of any
numeric type.
```
// Numeric is a constraint that matches any numeric type.
// It would likely be in a constraints package in the standard library.
type Numeric interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 |
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr |
~float32 | ~float64 |
~complex64 | ~complex128
}
// DotProduct returns the dot product of two slices.
// This panics if the two slices are not the same length.
func DotProduct[T Numeric](s1, s2 []T) T {
if len(s1) != len(s2) {
panic("DotProduct: slices of unequal length")
}
var r T
for i := range s1 {
r += s1[i] * s2[i]
}
return r
}
```
(Note: the generics implementation approach may affect whether
`DotProduct` uses FMA, and thus what the exact results are when using
floating point types.
It's not clear how much of a problem this is, or whether there is any
way to fix it.)
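A usage sketch — repeating the constraint and function above so the example stands alone — shows the same code serving `int` and `float64` slices through type inference:

```go
package main

import "fmt"

// Numeric and DotProduct are copied from the example above so
// this sketch compiles on its own.
type Numeric interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr |
		~float32 | ~float64 |
		~complex64 | ~complex128
}

func DotProduct[T Numeric](s1, s2 []T) T {
	if len(s1) != len(s2) {
		panic("DotProduct: slices of unequal length")
	}
	var r T
	for i := range s1 {
		r += s1[i] * s2[i]
	}
	return r
}

func main() {
	// The type arguments are inferred from the slice arguments.
	fmt.Println(DotProduct([]int{1, 2, 3}, []int{4, 5, 6}))     // 32
	fmt.Println(DotProduct([]float64{0.5, 2}, []float64{4, 1})) // 4
}
```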
### Absolute difference
Compute the absolute difference between two numeric values, by using
an `Abs` method.
This uses the same `Numeric` constraint defined in the last example.
This example uses more machinery than is appropriate for the simple
case of computing the absolute difference.
It is intended to show how the common part of algorithms can be
factored into code that uses methods, where the exact definition of
the methods can vary based on the kind of type being used.
Note: the code used in this example will not work in Go 1.18.
We hope to resolve this and make it work in future releases.
```
// NumericAbs matches numeric types with an Abs method.
// It embeds the Numeric constraint from the previous example.
type NumericAbs[T any] interface {
	Numeric
	Abs() T
}
// AbsDifference computes the absolute value of the difference of
// a and b, where the absolute value is determined by the Abs method.
func AbsDifference[T NumericAbs[T]](a, b T) T {
d := a - b
return d.Abs()
}
```
We can define an `Abs` method appropriate for different numeric types.
```
// OrderedNumeric matches numeric types that support the < operator.
type OrderedNumeric interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 |
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr |
~float32 | ~float64
}
// Complex matches the two complex types, which do not have a < operator.
type Complex interface {
~complex64 | ~complex128
}
// OrderedAbs is a helper type that defines an Abs method for
// ordered numeric types.
type OrderedAbs[T OrderedNumeric] T
func (a OrderedAbs[T]) Abs() OrderedAbs[T] {
if a < 0 {
return -a
}
return a
}
// ComplexAbs is a helper type that defines an Abs method for
// complex types.
type ComplexAbs[T Complex] T
func (a ComplexAbs[T]) Abs() ComplexAbs[T] {
d := math.Hypot(float64(real(a)), float64(imag(a)))
return ComplexAbs[T](complex(d, 0))
}
```
We can then define functions that do the work for the caller by
converting to and from the types we just defined.
```
// OrderedAbsDifference returns the absolute value of the difference
// between a and b, where a and b are of an ordered type.
func OrderedAbsDifference[T OrderedNumeric](a, b T) T {
return T(AbsDifference(OrderedAbs[T](a), OrderedAbs[T](b)))
}
// ComplexAbsDifference returns the absolute value of the difference
// between a and b, where a and b are of a complex type.
func ComplexAbsDifference[T Complex](a, b T) T {
return T(AbsDifference(ComplexAbs[T](a), ComplexAbs[T](b)))
}
```
It's worth noting that this design is not powerful enough to write
code like the following:
```
// This function is INVALID.
func GeneralAbsDifference[T Numeric](a, b T) T {
switch (interface{})(a).(type) {
case int, int8, int16, int32, int64,
uint, uint8, uint16, uint32, uint64, uintptr,
float32, float64:
return OrderedAbsDifference(a, b) // INVALID
case complex64, complex128:
return ComplexAbsDifference(a, b) // INVALID
}
}
```
The calls to `OrderedAbsDifference` and `ComplexAbsDifference` are
invalid, because not all the types that implement the `Numeric`
constraint can implement the `OrderedNumeric` or `Complex`
constraints.
Although the type switch means that this code would conceptually work
at run time, there is no support for writing this code at compile
time.
This is another way of expressing one of the omissions listed above:
this design does not provide for specialization.
## Acknowledgements
We'd like to thank many people on the Go team, many contributors to
the Go issue tracker, and all the people who have shared their ideas
and their feedback on earlier design drafts.
We read all of it, and we're grateful.
For this version of the proposal in particular we received detailed
feedback from Josh Bleecher-Snyder, Jon Bodner, Dave Cheney, Jaana
Dogan, Kevin Gillette, Mitchell Hashimoto, Chris Hines, Bill Kennedy,
Ayke van Laethem, Daniel Martí, Elena Morozova, Roger Peppe, and Ronna
Steinberg.
## Appendix
This appendix covers various details of the design that don't seem
significant enough to cover in earlier sections.
### Generic type aliases
A type alias may refer to a generic type, but the type alias may not
have its own parameters.
This restriction exists because it is unclear how to handle a type
alias with type parameters that have constraints.
```
type VectorAlias = Vector
```
In this case uses of the type alias will have to provide type
arguments appropriate for the generic type being aliased.
```
var v VectorAlias[int]
```
Type aliases may also refer to instantiated types.
```
type VectorInt = Vector[int]
```
### Instantiating a function
Go normally permits you to refer to a function without passing any
arguments, producing a value of function type.
You may not do this with a function that has type parameters; all type
arguments must be known at compile time.
That said, you can instantiate the function, by passing type
arguments, but you don't have to call the instantiation.
This will produce a function value with no type parameters.
```
// PrintInts is type func([]int).
var PrintInts = Print[int]
```
### Embedded type parameter
When a generic type is a struct, and the type parameter is
embedded as a field in the struct, the name of the field is the name
of the type parameter.
```
// A Lockable is a value that may be safely simultaneously accessed
// from multiple goroutines via the Get and Set methods.
type Lockable[T any] struct {
T
mu sync.Mutex
}
// Get returns the value stored in a Lockable.
func (l *Lockable[T]) Get() T {
l.mu.Lock()
defer l.mu.Unlock()
return l.T
}
// Set sets the value in a Lockable.
func (l *Lockable[T]) Set(v T) {
l.mu.Lock()
defer l.mu.Unlock()
l.T = v
}
```
### Embedded type parameter methods
When a generic type is a struct, and the type parameter is embedded as
a field in the struct, any methods of the type parameter's constraint
are promoted to be methods of the struct.
(For purposes of [selector
resolution](https://golang.org/ref/spec#Selectors), these methods are
treated as being at depth 0 of the type parameter, even if in the
actual type argument the methods were themselves promoted from an
embedded type.)
```
// NamedInt is an int with a name. The name can be any type with
// a String method.
type NamedInt[Name fmt.Stringer] struct {
Name
val int
}
// Name returns the name of a NamedInt.
func (ni NamedInt[Name]) Name() string {
// The String method is promoted from the embedded Name.
return ni.String()
}
```
### Embedded instantiated type
When embedding an instantiated type, the name of the field is the name
of the type without the type arguments.
```
type S struct {
T[int] // field name is T
}
func F(v S) int {
return v.T // not v.T[int]
}
```
### Generic types as type switch cases
A generic type may be used as the type in a type assertion or as a
case in a type switch.
Here are some trivial examples:
```
func Assertion[T any](v interface{}) (T, bool) {
t, ok := v.(T)
return t, ok
}
func Switch[T any](v interface{}) (T, bool) {
switch v := v.(type) {
case T:
return v, true
default:
var zero T
return zero, false
}
}
```
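For instance, copying `Assertion` from above so the sketch compiles on its own:

```go
package main

import "fmt"

// Assertion is copied from the example above.
func Assertion[T any](v interface{}) (T, bool) {
	t, ok := v.(T)
	return t, ok
}

func main() {
	var v interface{} = "hello"
	s, ok := Assertion[string](v)
	fmt.Println(s, ok) // hello true
	n, ok := Assertion[int](v)
	fmt.Println(n, ok) // 0 false
}
```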
In a type switch, it's OK if a generic type turns out to duplicate
some other case in the type switch.
The first matching case is chosen.
```
func Switch2[T any](v interface{}) int {
switch v.(type) {
case T:
return 0
case string:
return 1
default:
return 2
}
}
// S2a will be set to 0.
var S2a = Switch2[string]("a string")
// S2b will be set to 1.
var S2b = Switch2[int]("another string")
```
### Method sets of constraint elements
Much as the type set of an interface type is the intersection of the
type sets of the elements of the interface, the method set of an
interface type can be defined as the union of the method sets of the
elements of the interface.
In most cases, an embedded element will have no methods, and as such
will not contribute any methods to the interface type.
That said, for completeness, we'll note that the method set of `~T` is
the method set of `T`.
The method set of a union element is the intersection of the method
sets of the elements of the union.
These rules are implied by the definition of type sets, but they are
not needed for understanding the behavior of constraints.
### Permitting constraints as ordinary interface types
This is a feature we are not suggesting now, but could consider for
later versions of the language.
We have proposed that constraints can embed some additional elements.
With this proposal, any interface type that embeds anything other than
an interface type can only be used as a constraint or as an embedded
element in another constraint.
A natural next step would be to permit using interface types that
embed any type, or that embed these new elements, as an ordinary type,
not just as a constraint.
We are not proposing that now.
But the rules for type sets and method sets above describe how they
would behave.
Any type that is an element of the type set could be assigned to such
an interface type.
A value of such an interface type would permit calling any member of
the method set.
This would permit a version of what other languages call sum types or
union types.
It would be a Go interface type to which only specific types could be
assigned.
Such an interface type could still take the value `nil`, of course, so
it would not be quite the same as a typical sum type as found in other
languages.
Another natural next step would be to permit approximation elements
and union elements in type switch cases.
That would make it easier to determine the contents of an interface
type that used those elements.
That said, approximation elements and union elements are not types,
and as such could not be used in type assertions.
### Type inference for composite literals
This is a feature we are not suggesting now, but could consider for
later versions of the language.
We could also consider supporting type inference for composite
literals of generic types.
```
type Pair[T any] struct { f1, f2 T }
var V = Pair{1, 2} // inferred as Pair[int]{1, 2}
```
It's not clear how often this will arise in real code.
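For comparison, this is what must be written without that inference (a runnable sketch of the `Pair` example above):
```
package main

import "fmt"

type Pair[T any] struct{ f1, f2 T }

func main() {
	// Without the proposed inference the type argument is explicit;
	// the suggestion above would allow Pair{1, 2} to mean the same.
	v := Pair[int]{1, 2}
	fmt.Println(v.f1 + v.f2)
}
```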
### Type inference for generic function arguments
This is a feature we are not suggesting now, but could consider for
later versions of the language.
In the following example, consider the call to `Find` in `FindClose`.
Type inference can determine that the type argument to `Find` is `T4`,
and from that we know that the type of the final argument must be
`func(T4, T4) bool`, and from that we could deduce that the type
argument to `IsClose` must also be `T4`.
However, the type inference algorithm described earlier cannot do
that, so we must explicitly write `IsClose[T4]`.
This may seem esoteric at first, but it comes up when passing generic
functions to generic `Map` and `Filter` functions.
```
// Differ has a Diff method that returns how different a value is.
type Differ[T1 any] interface {
Diff(T1) int
}
// IsClose returns whether a and b are close together, based on Diff.
func IsClose[T2 Differ[T2]](a, b T2) bool {
return a.Diff(b) < 2
}
// Find returns the index of the first element in s that matches e,
// based on the cmp function. It returns -1 if no element matches.
func Find[T3 any](s []T3, e T3, cmp func(a, b T3) bool) int {
for i, v := range s {
if cmp(v, e) {
return i
}
}
return -1
}
// FindClose returns the index of the first element in s that is
// close to e, based on IsClose.
func FindClose[T4 Differ[T4]](s []T4, e T4) int {
// With the current type inference algorithm we have to
// explicitly write IsClose[T4] here, although it
// is the only type argument we could possibly use.
return Find(s, e, IsClose[T4])
}
```
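The example can be exercised end to end with a concrete type. In this hedged sketch, `Meters` is an invented type whose `Diff` method returns the absolute difference between two values:
```
package main

import "fmt"

// Differ has a Diff method that returns how different a value is.
type Differ[T1 any] interface {
	Diff(T1) int
}

type Meters int

// Diff returns the absolute difference between two distances.
func (m Meters) Diff(n Meters) int {
	if m > n {
		return int(m - n)
	}
	return int(n - m)
}

func IsClose[T2 Differ[T2]](a, b T2) bool {
	return a.Diff(b) < 2
}

func Find[T3 any](s []T3, e T3, cmp func(a, b T3) bool) int {
	for i, v := range s {
		if cmp(v, e) {
			return i
		}
	}
	return -1
}

func FindClose[T4 Differ[T4]](s []T4, e T4) int {
	// IsClose[T4] must be written explicitly under the
	// current inference algorithm.
	return Find(s, e, IsClose[T4])
}

func main() {
	// 21 is the first element within distance 2 of 22.
	fmt.Println(FindClose([]Meters{10, 20, 21}, 22))
}
```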
### Reflection on type arguments
Although we don't suggest changing the reflect package, one
possibility to consider for the future would be to add two new
methods to `reflect.Type`: `NumTypeArgument() int` would return the
number of type arguments to a type, and `TypeArgument(i int) Type` would
return the i'th type argument.
`NumTypeArgument` would return non-zero for an instantiated generic
type.
Similar methods could be defined for `reflect.Value`, for which
`NumTypeArgument` would return non-zero for an instantiated generic
function.
There might be programs that care about this information.
# Proposal: Multi-Module Workspaces in `cmd/go`
Author(s): Michael Matloob
Last updated: 2021-04-22
Discussion at https://golang.org/issue/45713.
## Abstract
This proposal describes a new _workspace_ mode in the `go` command for editing
multiple modules. The presence of a `go.work` file in the working directory or a
containing directory will put the `go` command into workspace mode. The
`go.work` file specifies a set of local modules that comprise a workspace. When
invoked in workspace mode, the `go` command will always select these modules and
a consistent set of dependencies.
## Glossary
These terms are used often in this document. The
[Go Modules Reference](https://golang.org/ref/mod) and its
[Glossary](https://golang.org/ref/mod#glossary) provide more detail.
* ***Main*** **modules**: The module the user is working in. Before this
proposal, this is the single module containing the directory where the `go`
command is invoked. This module is used as the starting point when running
MVS. This proposal proposes allowing multiple main modules.
* ***Module version***: From the perspective of the go command, a module
version is a particular instance of a module. This can be a released version
or pseudo version of a module, or a directory with a go.mod file.
* ***Build list***: The _build list_ is the list of _module versions_ used for
a build command such as go build, go list, or go test. The build list is
determined from the main module's go.mod file and go.mod files in
transitively required modules using minimal version selection. The build
list contains versions for all modules in the module graph, not just those
relevant to a specific command.
* ***MVS*** **or** ***Minimal Version Selection***: The algorithm used to determine
the versions of all modules that will be used in a build. See the
[Minimal Version Selection](https://golang.org/ref/mod#minimal-version-selection)
section in the Go Modules Reference for more information.
* ***mode***: This document references module _mode_ and workspace _mode_. The
modes are the different ways the `go` command determines which modules and
packages it's building and how dependencies are resolved. For example the
`-mod=readonly` mode uses the versions of the modules listed in the `go.mod`
file and fails if it would need to add in a new module dependency, and the
`-mod=vendor` mode uses the modules in the `vendor` directory.
## Background
Users often want to make changes across multiple modules: for instance, to
introduce a new interface in a package in one module along with a usage of that
interface in another module. Normally, the `go` command recognizes a single
"main" module the user can edit. Other modules are read-only and are loaded from
the module cache. The `replace` directive is the exception: it allows users to
replace the resolved version of a module with a working version on disk. But
working with the replace directive can often be awkward: each module developer
might have working versions at different locations on disk, so having the
directive in a file that needs to be distributed with the module isn't a good
fit for all use cases.
[`gopls`](https://golang.org/s/gopls) offers users a convenient way to make
changes across modules without needing to manipulate replacements. When multiple
modules are opened in a `gopls` workspace, it synthesizes a single go.mod file,
called a _supermodule_, that pulls in each of the modules being worked on.
The supermodule results in a single build list allowing the tooling to surface
changes made in a dependency module to a dependent module. But this means that
`gopls` is building with a different set of versions than an invocation of the
`go` command from the command line, potentially producing different results.
Users would have a better experience if they could create a configuration that
could be used by `gopls` as well as their direct invocations of `cmd/go` and
other tools. See the
[Multi-project gopls workspaces](37720-gopls-workspaces.md) document and
proposal issues [#37720](https://golang.org/issue/37720) and
[#32394](https://golang.org/issue/32394).
### Scope
This proposal specifically tries to improve the experience in the `go` command
(and the tools using it) for working in multi-module workspaces. That means the
following are out of scope:
#### Tagging and releasing new versions of a module
This proposal does not address the problem of tagging and releasing new versions
of modules so that new versions of dependent modules depend on new versions of
the dependency modules. But these sorts of features don't belong in the `go`
command. Even so, the workspace file can be useful for a future tool or feature
that solves the tagging and releasing problem: the workspace would help the tool
know the set of modules the user is working on, and together with the module
dependency graph, the tool would be able to determine versions for the new
modules.
#### Building and testing a module with the user's configuration
It would be useful for module developers to build and test their modules with
the same build list seen by users of their modules. Unfortunately, there are
many such build lists because those build lists depend on the set of modules the
user's module requires, and the user needs to know what those modules are. So
this proposal doesn't try to solve that problem. But this proposal can make it
easier to switch between multiple configurations, which opens the door for other
tools for testing modules in different configurations.
## Proposal
### The `-workfile` flag
The new `-workfile` flag will be accepted by module-aware build commands and
most `go mod` subcommands. The following is a table of which commands can
operate in workspace mode and which can operate in module mode. Commands that
can operate in workspace mode will accept `-workfile` and follow the workspace
resolution steps below.
`go mod download`, `go mod graph`, `go mod verify` and `go mod why` all have
meanings based on the build list, so they will all work in workspace mode
according to the build list.
`go mod edit`, `go mod init`, `go mod tidy` and `go mod vendor` only make sense
in a single-module context, so they will ignore the workspace.
`go get` could make sense in workspace mode but not in all contexts, so it
will also ignore the workspace.
| Subcommand     | Module | Workspace |
|----------------|--------|-----------|
| `mod init`     | o      |           |
| `work init`    |        | o         |
| `mod download` | o      | o         |
| `mod graph`    | o      | o         |
| `mod verify`   | o      | o         |
| `mod why`      | o      | o         |
| `mod edit`     | o      |           |
| `mod tidy`     | o      |           |
| `mod vendor`   | o      |           |
| `get`          | o      |           |
| `install`      | o      |           |
| `list`         | o      | o         |
| `build`        | o      | o         |
| `test`         | o      | o         |
If `-workfile` is set to `off`, workspace mode will
be disabled. If it is `auto` (the default), workspace mode will be enabled if a
file named `go.work` is found in the current directory (or any of its parent
directories), and disabled otherwise. If `-workfile` names a path to an existing
file that ends in `.work`, workspace mode will be enabled. Any other value is an
error.
If workspace mode is on, `-mod=readonly` must be specified either implicitly or
explicitly. Otherwise, the `go` command will return an error. If `-mod` is not
explicitly set and a `go.work` file is found, `-mod=readonly` is set. (That is, it
takes precedence over the existence of a vendor/module.txt which would normally
imply `-mod=vendor`.)
If workspace mode is on, the `go.work` file (either named by `-workfile` or the
nearest one found when `-workfile` is `auto`) will be parsed to determine the
three parameters for workspace mode: a Go version, a list of directories, and a
list of replacements.
If workspace mode is on, the selected workspace file will show up in the `go
env` variable `GOWORK`. When not in workspace mode, `GOWORK` will be `off`.
### The `go.work` file
The following is an example of a valid `go.work` file:
```
go 1.17
use (
./baz // foo.org/bar/baz
./tools // golang.org/x/tools
)
replace golang.org/x/net => example.com/fork/net v1.4.5
```
The `go.work` file will have a similar syntax as the `go.mod` file. Restrictions
in [`go.mod` lexical elements](https://golang.org/ref/mod#go-mod-file-lexical)
still apply to the `go.work` file.
The `go.work` file has three directives: the `go` directive, the `use`
directive, and the `replace` directive.
#### The `go` Directive
The `go.work` file requires a `go` directive. The `go` directive accepts a
version just as it does in a `go.mod` file. The `go` directive is used to allow
adding new semantics to the `go.work` files without breaking previous users. It
does not override go versions in individual modules.
Example:
```
go 1.17
```
#### The `use` directive
The `use` directive takes an absolute or relative path to a directory
containing a `go.mod` file as an argument. The syntax of the path is the same as
directory replacements in `replace` directives. The path must be to a module
directory containing a `go.mod` file. The `go.work` file must contain at least
one `use` directive. The `go` command may optionally edit the comments on
the `use` directive when doing any operation in workspace mode to add the
module path from the directory's `go.mod` file.
Note that the `use` directive has no restriction on where the directory
is located: module directories listed in `go.work` file can be located outside
the directory the `go.work` file itself is in.
Example:
```
use (
./tools // golang.org/x/tools
./mod // golang.org/x/mod
)
```
Each directory listed (in this example `./tools` and `./mod`) refers to a single
module: the module specified by the `go.mod` file in that directory. It does
not refer to any other modules specified by `go.mod` files in subdirectories of
that directory.
The modules specified by `use` directives in the `go.work` file are the
_workspace modules_. The workspace modules will collectively be the main modules
when doing a build in workspace mode. These modules are always selected by MVS
with the version `""`, and their `replace` and `exclude` directives are applied.
#### The `replace` directive
The `replace` directive has the same syntax and semantics as the replace
directive in a `go.mod` file.
Example:
```
replace (
golang.org/x/tools => ../tools
golang.org/x/mod v0.4.1 => example.com/mymod v0.5
)
```
The `replace` directives in the `go.work` are applied in addition to and with
higher precedence than `replaces` in the workspace modules. A `replace`
directive in the `go.work` file overrides replace directives in workspace
modules applying to the same module or module version. If two or more workspace
modules replace the same module or module version with different module versions
or directories, and there is not an overriding `replace` in the `go.work` file,
the `go` command will report an error. The `go` command will report errors for
replacements of workspace modules that don't refer to the same directory as the
workspace module. If any of those exist in a workspace module replacing another
workspace module, the user will have to explicitly replace that workspace module
with its path on disk.
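As an illustration of the conflict rule (all paths and versions here are hypothetical), suppose two workspace modules replace `golang.org/x/net` with different forks in their own `go.mod` files; an overriding `replace` in the `go.work` file resolves the conflict for the whole workspace:
```
go 1.17

use (
    ./tools // its go.mod replaces golang.org/x/net with one fork
    ./mod   // its go.mod replaces golang.org/x/net with another fork
)

// Overrides both conflicting replaces for the whole workspace.
replace golang.org/x/net => example.com/fork/net v1.4.5
```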
### Semantics of workspace mode
If workspace mode is on and the `go.work` file has valid syntax, the Go version
provided by the `go.work` file is used to control the exact behavior of
workspace mode. For the first version of Go supporting workspace mode, and unless
changes are made in following versions, the following semantics apply:
When doing a build operation under workspace mode the `go` command will try to
find a `go.mod` file. If a `go.mod` file is found, its containing directory must
be declared with a `use` directive in the `go.work` file. Because the
build list is determined by the workspace rather than a `go.mod` file, outside
of a module, the `go` command will proceed as normal to build any non-relative
package paths or patterns. Outside of a module, a package composed of `.go`
files listed on the command line resolves its imports according to the
workspace, and the package's imports will be resolved according to the
workspace's build list.
The `all` pattern in workspace mode resolves to the union of `all` over the
set of workspace modules. `all` is the set of packages needed to build and test
packages in the workspace modules.
To construct the build list, each of the workspace modules are main modules and
are selected by MVS and their `replace` and `exclude` directives will be
applied. `replace` directives in the `go.work` file override the `replaces` in
the workspace modules. Similar to a single main module in module mode, each of
the main modules will have version `""`, but MVS will traverse other versions of
the main modules that are depended on by transitive module dependencies. For the
purposes of lazy loading, we load the explicit dependencies of each workspace
module when doing the deepening scan.
Module vendor directories are ignored in workspace mode because of the
requirement of `-mod=readonly`.
### Creating and editing `go.work` files
A new `go work` command will be added with the following subcommands
`go work init`, `go work use`, and `go work edit`.
`go work init` will take as arguments a (potentially empty) list of
directories it will use to write out a `go.work` file in the working directory
with a `go` statement and a `use` directive listing each of the
directories. `go work init` will take an optional `-o` flag to specify a
different output file path, which can be used to create workspace files for
other configurations.
`go work use` will take as arguments a set of directories to add as `use`
directives in the go.work file. If the `-r` flag is added, recursive
subdirectories of the listed directories will also be listed in `use`
directives. `use` directives whose directories don't exist, but that match the
arguments to `go work use`, will be removed from the `go.work` file.
`go work edit` will work similarly to `go mod edit` and take the following
flags:
* `-fmt` will reformat the `go.work` file
* `-go=version` will set the file's `go` directive to `version`
* `-use=path` and `-dropuse=path` will add and drop a use
directive for the given path
* `-replace` and `-dropreplace` will work exactly as they do for `go mod edit`
### Syncing the workspace's buildlist back to workspace modules
`go work sync` pushes the module versions of dependency
modules back into the go.mod files of the workspace modules. It does this
by calculating the build list in the workspace, and then upgrading the dependencies
of the workspace's modules to the versions in the workspace buildlist. Because
of MVS the versions in the workspace must be at least the same as the versions
in each component module.
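As a hedged sketch (module paths and versions invented): suppose the workspace uses modules `./a` and `./b`, where `./a` requires `golang.org/x/net v1.4.5` and `./b` requires `v1.4.6`. MVS selects `v1.4.6` for the workspace build list, so syncing upgrades `./a`'s requirement:
```
// ./a/go.mod after running `go work sync`
module example.com/a

go 1.17

require golang.org/x/net v1.4.6 // upgraded from v1.4.5 to match the workspace
```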
## Rationale
This proposal addresses these workflows among others:
### Workflows
#### A change in one module that requires a change in another module
One common workflow is when a user wants to add a feature in an upstream module
and make use of the feature in their own module. Currently, they might open the
two modules in their editor through gopls, which will create a supermodule
requiring and replacing both of the modules, and creating a single build list
used for both of the modules. The editor tooling and builds done through the
editor will use that build list, but the user will not have access to the
'supermodule' outside their editor: go command invocations run in their terminal
outside the editor will use a different build list. The user can change their
go.mod to add a replace, which will be reflected in both the editor and their go
command invocations, but this is a change they will need to remember to revert
before submitting.
When these changes are done often, for example because a project's code base is
split among several modules, a user might want to have a consistent
configuration used to join the modules together. In that case the user will want
to configure their editor and the `go` command to always use a single build list
when working in those modules. One way to do this is to work in a top level
module that transitively requires the others, if it exists, and replace the
dependencies. But they then need to remember to not check in the replace and
always need to run their go commands from that designated module.
##### Example
As an example, the `gopls` code base in `golang.org/x/tools/internal/lsp` might
want to add a new function to `golang.org/x/mod/modfile` package and start using
it. If the user has the `golang.org/x/mod` and `golang.org/x/tools` repos in the
same directory they might run:
```
go work init ./mod ./tools
```
which will produce this file:
```
go 1.17
use (
./mod // golang.org/x/mod
./tools // golang.org/x/tools
)
```
Then they could work on the new function in `golang.org/x/mod/modfile` and its
usage in `golang.org/x/tools/internal/lsp` and when run from any directory in
the workspace the `go` command would present a consistent build list. When they
were satisfied with their change, they could release a new version of
`golang.org/x/mod`, update `golang.org/x/tools`'s `go.mod` to require the new
version of `golang.org/x/mod`, and then turn off workspace mode with
`-workfile=off` to make sure the change behaves as expected.
#### Multiple modules in the same repository that depend on each other
A further variant of the above is a module that depends on another module in the
same repository. In this case checking in go.mod files that require and replace
each other is not as much of a problem, but especially as the number of modules
grows keeping them in sync becomes more difficult. If a user wants to keep the
same build list as they move between directories so that they can continue to
test against the same configuration, they will need to make sure all the modules
replace each other, which is error prone. It would be far more convenient to
have a single configuration linking all the modules together. Of course, this
use case has the additional problem of updating the requirements on the replaced
modules in the repository. This is a case of the problem of updating version
requirements on released modules which is out of scope for this proposal.
Our goal is that when there are several tightly coupled modules in the same
repository, users would choose to create `go.work` files defining the workspace
using the modules in those repositories instead of adding `replaces` in the
`go.mod` files. Perhaps the creation of the file can be automated by an external
tool that scans for all the `go.mod` files recursively contained in a directory.
These `go.work` files should not be checked into the
repositories so that they don't override the workspaces users explicitly define.
Checking in `go.work` files could also mean that CI/CD systems would not test
the actual set of version requirements on a module, nor verify that version
requirements among the repository's modules are properly incremented to use changes in the
modules. And of course, if a repository contains only a single module, or
unrelated modules, there's not much utility to adding a `go.work` file because
each user may have a different directory structure on their computer outside of
that repository.
##### Example
As a simple example the `gopls` binary is in the module
`golang.org/x/tools/gopls` which depends on other packages in the
`golang.org/x/tools` module. Currently, building and testing the top-level
`gopls` code is done by entering the directory of the `golang.org/x/tools/gopls`
module, which replaces its usage of the `golang.org/x/tools` module:
```
module golang.org/x/tools/gopls
go 1.12
require (
...
golang.org/x/tools v0.1.0
...
)
replace golang.org/x/tools => ../
```
This `replace` can be removed and replaced with a `go.work` file that includes
both modules in the directory above the checkout of the `golang.org/x/tools`
```
// golang.org/x/tools/go.work
go 1.17
use (
./tools
./tools/gopls
)
```
This allows any of the tests in either module to be run from anywhere in the
repo. Of course, to release the modules, the `golang.org/x/tools` module needs
to be tagged and released, and then the `golang.org/x/tools/gopls` module needs to
require that new release.
#### Switching between multiple configurations
Users might want to easily be able to test their modules with different
configurations of dependencies. For instance, they might want to test their
module using the development versions of the dependencies, using the build list
determined using the module as a single main module, and using a build list with
alternate versions of dependencies that are commonly used. By making a workspace
with the development versions of the dependencies and another adding the
alternative versions of the dependencies with replaces, it's easy to switch
between the three configurations.
Users who want to test using a subset of the workspace modules can also easily
comment out some of the use directives in their workspace file instead of
making separate workspace files with the appropriate subset of workspace
modules, if that works better for their workflows.
#### Workspaces in `gopls`
With this change, users will be able to configure `gopls` to use `go.work` files
describing their workspace. `gopls` can pass the workspace to the `go` command
in its invocations if it's running a version of Go that supports workspaces, or
can easily rewrite the workspace file into a supermodule for earlier versions.
The semantics of workspace mode are not quite the same as for a supermodule in
general (for instance `...` and `all` have different meanings) but are the same
or close enough for the cases that matter.
#### A `GOPATH`-like setup
While this proposal does not aim to completely recreate all `GOPATH` workflows,
it can be used to create a setup that shares some aspects of the `GOPATH` setup:
A user who is working with a set of modules in `GOPATH`, but in `GOPATH` mode
so that all dependencies are resolved from the `GOPATH` tree can add a `go.work`
file to the base of a `GOPATH` directory that lists all the modules in that
`GOPATH` (and even those in other `GOPATH` directories, if their path has
multiple elements). Then all their dependencies that are under that `GOPATH`
directory will continue to be resolved from those locations.
Of course there are caveats to this workflow: `GOPATH` packages that are not
contained in a module can't be added to the workspace, and the `go.work` file
needs to be manually maintained to add modules instead of walking a directory
tree like `GOPATH` mode does. And opting into workspace mode piecemeal by adding
modules one by one can be frustrating because the modules outside of the new
workspace will require `-workfile` to be set to `off` or another `go.work` file
that includes it. But even with these differences, used this way, `go.work` can
recreate some of the convenience of `GOPATH` while still providing the benefits
of modules.
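For example (the layout is hypothetical), a `go.work` file at the base of a `GOPATH` directory might list each module checked out under `src`; each listed directory must contain a `go.mod` file:
```
// $GOPATH/go.work
go 1.17

use (
    ./src/example.com/alpha
    ./src/example.com/beta
    ./src/golang.org/x/mod
)
```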
### The `-workfile` flag
One alternative that was considered for disabling workspace mode would be to have
workspace mode be an option for the `-mod` flag. `-mod=work` would be the default
and users could set any other value to turn off workspace mode. This removes the
redundant knob that exists in this proposal where workspace mode is set
independently of the `-mod` flag, but only `-mod=readonly` is allowed. The
reason this alternative was adopted for this proposal is that it could be
unintuitive and hard for users to remember to set `-mod=readonly` to turn
workspace mode off. Users might think to set `-mod=mod` to turn workspace mode
off even though they don't intend to modify their `go.mod` file.
This also avoids conflicting defaults: the existence of a `go.work` file implies
workspace mode, but the existence of `vendor/module.txt` implies `-mod=vendor`.
Separating the configurations makes it clear that the `go.work` file takes
precedence.
But regardless of the above, it's useful to have a way to specify the path to a
different `go.work` file similar to the `-modfile` flag for the same reasons
that `-modfile` exists. Given that `-workfile` exists it's natural to add a
`-workfile=off` option to turn off workspace mode.
### The `go.work` file
The configuration of multi-module workspaces is put in a file rather than being
passed through an environment variable or flag because there are multiple
parameters for configuration that would be difficult to put into a single flag
or environment variable and unwieldy to put into multiple.
The `go` command locates `go.work` files the same way it locates `go.mod` files
to make it easy for users already familiar with modules to learn the rules for
whether their current directory is in a workspace and which one.
`go.work` files allow users to operate in directories outside of any modules but
still use the workspace build list. This makes it easy for users to have a
`GOPATH`-like user experience by placing a `go.work` file in their home
directory linking their modules together.
Like the `go.mod` file, we want the format of the configuration for multi-module
workspaces to be machine writable and human-readable. Though there are other
popular configuration formats such as yaml and json, they can often be confusing
or annoying to write. The format used by the `go.mod` file is already familiar to
Go programmers, and is easy for both humans and computers to read and write.
Modules are listed by the directory containing the module's `go.mod` file rather
than listing the paths to the `go.mod` files themselves to avoid the redundant
basename in every path. Alternatively, if the `go.mod` files were listed
directly it would be more clear that directories aren't being searched for all
modules contained under them but rather refer to a single module. Modules are
required to be listed explicitly instead of allowing for patterns that match
all modules under a directory because those entries would require slow directory
walks each time the `go` command would need to load a workspace. Because
a module's path is not always clear from its directory name, we will allow the
go command to add comments on the `use` directive with the module path.
Requiring the directories listed in the `go.work` file to have `go.mod` files
means that projects without `go.mod` files can't be added to a workspace even
though they can be required as implicit modules in `go.mod` files. To support
these we would have to add to the `go.work` file some way of associating the
directories with `go.mod` files. But these projects are already getting more
rare and the missing `go.mod` can be worked around by adding a temporary
`go.mod` file to the project's directory.
The naming of the `go` and `replace` directives is straightforward:
they are the same as in `go.mod`. The `use` directive is called `use`
because it causes the go.work file to use a directory as a main
module. Using `module` to list the module directories could be
confusing because there is already a module directive in `go.mod` that
has a different meaning. On the other hand, names like `modvers` and
`moddir` are awkward.
`go.work` files should not be checked into version control repos containing
modules so that the `go.work` file in a module does not end up overriding
the configuration a user created themselves outside of the module. The `go.work`
documentation should contain clear warnings about this.
### Semantics of workspace mode
A single build list is constructed from the set of workspace modules to give
developers consistent results wherever they are in their workspace. Further, the
single build list allows tooling to present a consistent view of the workspace,
so that editor operations and information doesn't change in surprising ways when
moving between files.
`replace` directives are respected when building the build list because many
modules already have many `replace`s in them that are necessary to properly
build them. Not respecting them would break users unnecessarily. `replace`
directives exist in the workspace file to allow for resolving conflicts between
`replace`s in workspace modules. Because all workspace modules exist as
co-equals in the workspace, there is no clear and intuitive way to resolve
`replace` conflicts without explicit input from the user. One alternative is
to add special syntax for overriding replaces to make the overriding behavior
more explicit, and another option is to add syntax that nullifies
replaces without overriding them.
Working in modules not listed in the workspace file is disallowed to avoid what
could become a common source of confusion: if the `go` command stayed in
workspace mode, it's possible that a command line query could resolve to a
different version of the module the directory contains. Users could be confused
about a `go build` or `go list` command completing successfully but not
respecting changes made in the current module. On the other hand, a user could
be confused about the go command implicitly ignoring the workspace if they
intended the current module to be in the workspace. It is better to make the
situation clear to the user to allow them either to add the current module to
the workspace or explicitly turn workspace mode off according to their
preference.
Module vendoring is ignored in workspace mode because it is not clear which
modules' vendor directories should be respected if there are multiple workspace
modules with vendor directories containing the same dependencies. Worse, if
module A vendors example.com/foo/pkg@A and module B vendors
example.com/foo/sub/pkg@v0.2.0, then a workspace that combines A and B would
select example.com/foo v0.2.0 in the overall build list, but would not have any
vendored copy of example.com/foo/pkg for that version. As the modules spec says,
"Vendoring may be used to allow interoperation with older versions of Go, or to
ensure that all files used for a build are stored in a single file tree."
Because developers in workspace mode are necessarily not using an older version
of Go, and the build list used by the workspace is different than that used in
the module, vendoring is not as useful for workspaces as it is for individual
modules.
### `go.work.sum` files
The `go` command will use the collective set of `go.sum` files that exist across
the workspace modules to verify dependency modules, but there are cases where
the `go.sum` files in the workspace modules collectively do not contain all sums
needed to verify the build. The simpler case is when the workspace `go.sum`
files themselves are incomplete: then the `go` command will add missing sums to
the workspace's `go.work.sum` file rather than to a module's `go.sum`. But even
if all workspace `go.sum` files are complete, they may still not contain all
necessary sums:
> If the workspace includes modules `X` and `Y`, and `X` imports a package from
> `example.com/foo@v1.0.0`, and `Y` has a transitive requirement on
> `example.com/foo@v1.1.0` (but does not import any packages from it), then
> `X/go.sum` will contain a checksum only for `v1.0.0/go.sum` and `v1.0.0`, and
> `Y` will contain a checksum only for `v1.1.0/go.sum`. No individual module
> will have a checksum for the source code for `v1.1.0`, because no module in
> isolation actually uses that source code.
### Creating and editing `go.work` files
The `go work init` and `go work edit` subcommands are being added for the
same reasons that the `go mod init` and `go mod edit` commands exist: they
make it more convenient to create and edit `go.work` files. The names are
awkward, and it was not initially clear that it would be worth naming the
commands `go work init` and `go work edit` if `go work` would only have two
subcommands.
### Syncing requirements in workspace back to `go.mod` files
`go work sync` allows users to eliminate divergence between the build list used
when developing and the build lists users will see when working within
the individual modules separately.
## Compatibility
Tools based on the `go` command, whether through direct invocations of `go
list` or via `golang.org/x/tools/go/packages`, will work with workspaces
without changes.
This change does not affect the Go language or its core libraries. But we would
like to maintain the semantics of a `go.work` file across versions of Go to
avoid causing unnecessary churn and surprise for users.
This is why all valid `go.work` files provide a Go version. Newer versions of Go
will continue to respect the workspace semantics of the version of Go listed in
the `go.work` file. This will make it possible (if necessary) to change the
semantics of workspace files in future versions of Go for users who create new
workspaces or explicitly increase the Go version of their `go.work` file.
## Implementation
The implementation for this would all be in the `go` command. It would need to
be able to read `go.work` files, which we could easily implement reusing parts
of the `go.mod` parser. We would need to add the new `-workfile` flag to the `go`
command and modify the `go` command to look for the `go.work` file to determine
if it's in workspace mode. The most substantial part of the implementation would
be to modify the module loader to be able to accept multiple main modules rather
than a single main module, and run MVS with the multiple main modules when it is
in workspace mode.
To avoid issues with the release cycle, if the implementation is not finished
before a release, the behavior to look for a `go.work` file and to turn on
workspace mode can be guarded behind a `GOEXPERIMENT`. With the experiment off,
work on the implementation can continue even if it can't be completed in time,
because the code will never be active in the release. We could also
set the `-workfile` flag's default to `off` in the first version and change it
to its automatic behavior later.
## Related issues
### [#32394](https://golang.org/issue/32394) x/tools/gopls: support multi-module workspaces
Issue [#32394](https://golang.org/issue/32394) is about `gopls`' support for
multi-module workspaces. `gopls` currently allows users to provide a "workspace
root" which is a directory it searches for `go.mod` files to build a supermodule
from. Alternatively, users can create a `gopls.mod` file in their workspace root
that `gopls` will use as its supermodule. This proposal creates a concept of a
workspace similar to `gopls`'s, but one that is understood by the `go` command
itself, so that users can have a consistent configuration across their editor
and direct invocations of the `go` command.
### [#44347](https://golang.org/issue/44347) proposal: cmd/go: support local experiments with interdependent modules; then retire GOPATH
Issue [#44347](https://golang.org/issue/44347) proposes adding a `GOTINKER`
mode to the `go` command. Under the proposal, if `GOTINKER` is set to a
directory, the `go` command will resolve import paths and dependencies in
modules by looking first in a `GOPATH`-structured tree under the `GOTINKER`
directory before looking at the module cache. This would allow users who want
to have a `GOPATH` like workflow to build a `GOPATH` at `GOTINKER`, but still
resolve most of their dependencies (those not in the `GOTINKER` tree) using
the standard module resolution system. It also provides for a multi-module
workflow for users who put their modules under `GOTINKER` and work in those
modules.
This proposal also tries to provide some aspects of the `GOPATH` workflow and
to help with multi-module workflows. A user could put the modules that they
would put under `GOTINKER` in that proposal into their `go.work` files to get
a similar experience to the one they'd get under the `GOTINKER` proposal. A
major difference between the proposals is that in `GOTINKER` modules would be
found by their paths under the `GOTINKER` tree instead of being explicitly
listed in the `go.work` file. But both proposals provide for a set of replaced
module directories that take precedence over the module versions that would
normally be resolved by MVS, when working in any of those modules.
### [#26640](https://golang.org/issue/26640) cmd/go: allow go.mod.local to contain replace/exclude lines
The issue of maintaining user-specific replaces in `go.mod` files was brought up
in [#26640](https://golang.org/issue/26640). It proposes an alternative
`go.mod.local` file so that local changes such as added replaces could be made
without the risk of committing them in `go.mod` itself. The `go.work` file
provides users a place to put many of the
local changes that would be put in the proposed `go.mod.local` file.
### [#39005](https://github.com/golang/go/issues/39005) proposal: cmd/go: introduce a build configurations file
Issue [#39005](https://github.com/golang/go/issues/39005) proposes to add a
mechanism to specify configurations for builds, such as build tags. This issue
is similar in that it is a proposal for additional configuration outside the
`go.mod` file. This proposal does not advocate for adding this type of
information to `go.work` and is focused on making changes across multiple
modules.
## Open issues
### Clearing `replace`s
We might want to add a mechanism to ignore all replaces of a module or module
version.
For example, one module in the workspace could have `replace example.com/foo =>
example.com/foo v0.3.4` because v0.4.0 would be selected otherwise and they
think it's broken. Another module in the workspace could have
`require example.com/foo v0.5.0` which fixes the incompatibilities and also adds
some features that are necessary.
In that case, the user might just want to knock the replacements away, but they
might not want to remove the existing replacements for policy reasons (or
because the replacement is actually in a separate repo).
### Preventing `go.work` files from being checked in to repositories
`go.work` files that are checked into repositories would cause confusion for Go
users because they change the build configuration without the user explicitly
opting in. Because of this they should be strongly discouraged. Though it's
not clear that the Go tool should enforce this, other tools that vet
repositories and releases should output warnings or errors for repositories
containing `go.work` files. There may also be other mechanisms not yet
considered in this document to discourage checked-in `go.work` files.
### Setting the `GOWORK` environment variable instead of `-workfile`
`GOWORK` can't be set by users because we don't want there to be ambiguity about
how to enter workspace mode, but an alternative could be to use an environment
variable instead of the `-workfile` flag to change the location of the workspace
file. Note that with the proposal as is, `-workfile` may be set in `GOFLAGS`,
and that may be persisted with `go env -w`. Developers won't need to type it out
every time.
### Patterns and Anti-Patterns
If this proposal is accepted, before it is released the documentation should
specify a set of patterns and anti-patterns and how to achieve certain workflows
using workspaces. For instance, it should mention that single-module
repositories should rarely contain `go.work` files.
## Future work
### Versioning and releasing dependent modules
As mentioned above, this proposal does not try to solve the problem of
versioning and releasing modules so that new versions of dependent modules
depend on new versions of the dependency modules. A tool built in the future can
use the current workspace as well as the set of dependencies in the module graph
to automate this work.
### Listing the module versions in the workspace
While modules have a single file listing all their root dependencies, a
workspace's root dependencies are split among many files, and the same is true
of its replaces. It may be helpful to add a command to list the effective
set of root dependencies and replaces and which go.mod file each of them comes
from.
Perhaps there could be a command named `go mod workstatus` that gives an
overview of the status of the modules in the workspace.
# Go 1.5 Vendor Experiment
Russ Cox\
based on work by Keith Rarick\
July 2015
[_golang.org/s/go15vendor_](https://golang.org/s/go15vendor)
This document is a revised copy of [https://groups.google.com/forum/#!msg/golang-dev/74zjMON9glU/4lWCRDCRZg0J](https://groups.google.com/forum/#!msg/golang-dev/74zjMON9glU/4lWCRDCRZg0J). See that link for the full mailing list thread and context.
This document was formerly stored on Google Docs at [https://docs.google.com/document/d/1Bz5-UB7g2uPBdOx-rw5t9MxJwkfpx90cqG9AFL0JAYo/edit](https://docs.google.com/document/d/1Bz5-UB7g2uPBdOx-rw5t9MxJwkfpx90cqG9AFL0JAYo/edit).
## Proposal
Based on Keith’s earlier proposal, we propose that, as an experiment for Go 1.5, we add a temporary vendor mode that causes the go command to add these semantics:
> If there is a source directory d/vendor, then, when compiling a source file within the subtree rooted at d, import "p" is interpreted as import "d/vendor/p" if that path names a directory containing at least one file with a name ending in “.go”.
>
> When there are multiple possible resolutions, the most specific (longest) path wins.
>
> The short form must always be used: no import path can contain “/vendor/” explicitly.
>
> Import comments are ignored in vendored packages.
The interpretation of an import depends only on where the source code containing that import sits in the tree.
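To illustrate the longest-path rule with a hypothetical tree:

```
$GOPATH/src/d/
	vendor/x/y/        # used by source under d/, outside d/sub
	sub/
		vendor/x/y/    # used by source under d/sub: the longer
		f.go           # (more specific) vendor path wins for import "x/y"
```

A file at `d/f.go` importing `"x/y"` resolves to `d/vendor/x/y`, while `d/sub/f.go` resolves the same import to `d/sub/vendor/x/y`.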
This proposal uses the word “vendor” instead of “external”, because (1) there is at least one popular vendoring tool (gb) that uses “vendor” and none that we know of that use “external”; (2) “external” sounds like the opposite of “internal”, which is not the right meaning; and (3) in discussions, everyone calls the broader topic vendoring. It would be nice not to bikeshed the name.
As an aside, the terms “internal vendoring” and “external vendoring” have been introduced into some discussions, to make the distinction between systems that rewrite import paths and systems that do not. With the addition of vendor directories to the go command, we hope that this distinction will fade into the past. There will just be vendoring.
**Update, January 2016**: These rules do not apply to the “C” pseudo-package, which is processed earlier than normal import processing. They do, however, apply to standard library packages. If someone wants to vendor (and therefore hide the standard library version of) “math” or even “unsafe”, they can.
**Update, January 2016**: The original text of the first condition above read “as import "d/vendor/p" if that exists”. It has been adjusted to require that the path name a directory containing at least one file with a name ending in .go, so that it is possible to vendor a/b/c without having the parent directory vendor/a/b hide the real a/b.
## Example
The gb project ships an example project called gsftp. It has a gsftp program with three dependencies outside the standard library: golang.org/x/crypto/ssh, golang.org/x/crypto/ssh/agent, and github.com/pkg/sftp.
Adjusting that example to use the new vendor directory, the source tree would look like:
$GOPATH
| src/
| | github.com/constabulary/example-gsftp/
| | | cmd/
| | | | gsftp/
| | | | | main.go
| | | vendor/
| | | | github.com/pkg/sftp/
| | | | golang.org/x/crypto/ssh/
| | | | | agent/
The file github.com/constabulary/example-gsftp/cmd/gsftp/main.go says:
import (
...
"golang.org/x/crypto/ssh"
"golang.org/x/crypto/ssh/agent"
"github.com/pkg/sftp"
)
Because github.com/constabulary/example-gsftp/vendor/golang.org/x/crypto/ssh exists and the file being compiled is within the subtree rooted at github.com/constabulary/example-gsftp (the parent of the vendor directory), the source line:
import "golang.org/x/crypto/ssh"
is compiled as if it were:
import "github.com/constabulary/example-gsftp/vendor/golang.org/x/crypto/ssh"
(but this longer form is never written).
So the source code in github.com/constabulary/example-gsftp depends on the vendored copy of golang.org/x/crypto/ssh, not one elsewhere in $GOPATH.
In this example, all the dependencies needed by gsftp are (recursively) supplied in the vendor directory, so “go install” does not read any source files outside the gsftp Git checkout. Therefore the gsftp build is reproducible given only the content of the gsftp Git repo and not any other code. And the dependencies need not be edited when copying them into the gsftp repo. And potential users can run “go get github.com/constabulary/example-gsftp/cmd/gsftp” without needing to have an additional vendoring tool installed or special GOPATH configuration.
The point is that adding just the vendor directory mechanism to the go command allows other tools to achieve their goals of reproducible builds and not modifying vendored source code while still remaining compatible with plain “go get”.
## Discussion
There are a few points to note about this.
The first, most obvious, and most serious is that the resolution of an import must now take into account the location where that import path was found. This is a fundamental implication of supporting vendoring that does not modify source code. However, the resolution of an import already needed to take into account the current GOPATH setting, so import paths were never absolute. This proposal allows the Go community to move from builds that require custom GOPATH configuration beforehand to builds that just work, because the (more limited) configuration is inferred from the conventions of the source file tree. This approach is also in keeping with the rest of the go command.
The second is that this does not attempt to solve the problem of vendoring resulting in multiple copies of a package being linked into a single binary. Sometimes having multiple copies of a library is not a problem; sometimes it is. At least for now, it doesn’t seem that the go command should be in charge of policing or solving that problem.
The final point is that existing tools like godep, nut, and gb will need to change their file tree layouts if they want to take advantage of compatibility with “go get”. However, compatibility with “go get” used to be impossible. Also, combined with eventual agreement on the vendor-spec, it should be possible for the tools themselves to interoperate.
## Deployment
The signals from the Go community are clear: the standard go command must support building source trees in which dependencies have been vendored (copied) without modifications to import paths.
We are well into the Go 1.5 cycle, so caution is warranted. If we put off making any changes, then vendoring tools and “go get” will remain incompatible for the next eight months. On the other hand, if we can make a small, targeted change, then vendoring tools can spend the next eight months experimenting and innovating and possibly converging on a common file tree layout compatible with the go command.
We believe that the experimental proposal above is that small, targeted change. It seems to be the minimal adjustment necessary to support fetching and building vendored, unmodified source code with “go get”. There are many possible extensions or complications we might consider, but for Go 1.5 we want to do as little as possible while remaining useful.
This change will only be enabled if the go command is run with GO15VENDOREXPERIMENT=1 in its environment. The use of the environment variable makes it easy to opt in without rewriting every invocation of the go command.
The new semantics changes the meaning of (breaks) source trees containing directories already named “vendor”. Of the over 60,000 source trees listed on godoc.org, fewer than 50 are affected. Putting the new semantics behind the environment variable avoids breaking those trees for now.
If we decide that the vendor behavior is correct, then in a later release (possibly Go 1.6) we would make the vendor behavior default on. Projects containing “vendor” directories could still use “GO15VENDOREXPERIMENT=0” to get the old behavior while they convert their code. In a still later release (possibly Go 1.7) we would remove the use of the environment variable, locking in the vendoring semantics.
Code inside vendor/ subtrees is not subject to import path checking.
The environment variable also enables fetching of git submodules during “go get”. This is meant to allow experiments to understand whether git submodules are an appropriate and useful way to vendor code.
Note that when “go get” fetches a new dependency it never places it in the vendor directory. In general, moving code into or out of the vendor directory is the job of vendoring tools, not the go command.
# Proposal: Structured Logging
Author: Jonathan Amsterdam
Date: 2022-10-19
Issue: https://go.dev/issue/56345
Discussion: https://github.com/golang/go/discussions/54763
Preliminary implementation: https://go.googlesource.com/exp/+/refs/heads/master/slog
Package documentation: https://pkg.go.dev/golang.org/x/exp/slog
We propose adding structured logging with levels to the standard library, to
reside in a new package with import path `log/slog`.
Structured logging is the ability to output logs with machine-readable
structure, typically key-value pairs, in addition to a human-readable message.
Structured logs can be parsed, filtered, searched and analyzed faster and more
reliably than logs designed only for people to read.
For many programs that aren't run directly by a user, like servers, logging is
the main way for developers to observe the detailed behavior of the system, and
often the first place they go to debug it.
Logs therefore tend to be voluminous, and the ability to search and filter them
quickly is essential.
In theory, one can produce structured logs with any logging package:
```
log.Printf(`{"message": %q, "count": %d}`, msg, count)
```
In practice, this is too tedious and error-prone, so structured logging packages
provide an API for expressing key-value pairs.
This proposal contains such an API.
We also propose generalizing the logging "backend."
The `log` package provides control only over the `io.Writer` that logs are
written to.
In the new package, every logger has a handler that can process a log event
however it wishes.
Although it is possible to have a structured logger with a fixed backend (for
instance, [zerolog] outputs only JSON), having a flexible backend provides
several benefits: programs can display the logs in a variety of formats, convert
them to an RPC message for a network logging service, store them for later
processing, and add to or modify the data.
Lastly, the design incorporates levels in a way that accommodates both
traditional named levels and [logr]-style verbosities.
The goals of this design are:
- Ease of use.
A survey of the existing logging packages shows that programmers
want an API that is light on the page and easy to understand.
This proposal adopts the most popular way to express key-value pairs:
alternating keys and values.
- High performance.
The API has been designed to minimize allocation and locking.
It provides an alternative to alternating keys and values that is
more cumbersome but faster (similar to [Zap]'s `Field`s).
- Integration with runtime tracing.
The Go team is developing an improved runtime tracing system.
Logs from this package will be incorporated seamlessly
into those traces, giving developers the ability to correlate their program's
actions with the behavior of the runtime.
## What does success look like?
Go has many popular structured logging packages, all good at what they do.
We do not expect developers to rewrite their existing third-party structured
logging code to use this new package.
We expect existing logging packages to coexist with this one for the foreseeable
future.
We have tried to provide an API that is pleasant enough that users will prefer it to existing
packages in new code, if only to avoid a dependency.
(Some developers may find the runtime tracing integration compelling.)
We also expect newcomers to Go to encounter this package before
learning third-party packages, so they will likely be most familiar with it.
But more important than any traction gained by the "frontend" is the promise of
a common "backend."
An application with many dependencies may find that it has linked in many
logging packages.
When all the logging packages support the standard handler interface proposed here,
then the application can create a single handler and install it once
for each logging library to get consistent logging across all its dependencies.
Since this happens in the application's main function, the benefits of a unified
backend can be obtained with minimal code churn.
We expect that this proposal's handlers will be implemented for all popular logging
formats and network protocols, and that every common logging framework will
provide a shim from their own backend to a handler.
Then the Go logging community can work together to build high-quality backends
that all can share.
## Prior Work
The existing `log` package has been in the standard library since the release of
Go 1 in March 2012. It provides formatted logging, but not structured logging or
levels.
[Logrus](https://github.com/Sirupsen/logrus), one of the first structured
logging packages, showed how an API could add structure while preserving the
formatted printing of the `log` package. It uses maps to hold key-value pairs,
which is relatively inefficient.
[Zap] grew out of Uber's frustration with the slow log times of their
high-performance servers. It showed how a logger that avoided allocations could
be very fast.
[Zerolog] reduced allocations even further, but at the cost of reducing the
flexibility of the logging backend.
All the above loggers include named levels along with key-value pairs. [Logr]
and Google's own [glog] use integer verbosities instead of named levels,
providing a more fine-grained approach to filtering high-detail logs.
Other popular logging packages are Go-kit's
[log](https://pkg.go.dev/github.com/go-kit/log), HashiCorp's [hclog], and
[klog](https://github.com/kubernetes/klog).
## Design
### Overview
Here is a short program that uses some of the new API:
```
import "log/slog"
func main() {
slog.SetDefault(slog.New(slog.NewTextHandler(os.Stderr)))
slog.Info("hello", "name", "Al")
slog.Error("oops", "err", net.ErrClosed, "status", 500)
slog.LogAttrs(slog.LevelError, "oops",
slog.Any("err", net.ErrClosed), slog.Int("status", 500))
}
```
This program generates the following output on standard error:
```
time=2022-10-24T16:05:48.054-04:00 level=INFO msg=hello name=Al
time=2022-10-24T16:05:48.054-04:00 level=ERROR err="use of closed network connection" msg=oops status=500
time=2022-10-24T16:05:48.054-04:00 level=ERROR err="use of closed network connection" msg=oops status=500
```
It begins by setting the default logger to one that writes log records in an
easy-to-read format similar to [logfmt].
(There is also a built-in handler for JSON.)
If the `slog.SetDefault` line is omitted,
the output is sent to the standard log package,
producing mostly structured output:
```
2022/10/24 16:07:00 INFO hello name=Al
2022/10/24 16:07:00 ERROR oops err="use of closed network connection" status=500
2022/10/24 16:07:00 ERROR oops err="use of closed network connection" status=500
```
The program outputs three log messages augmented with key-value pairs.
The first logs at the Info level, passing a single key-value pair along with the
message.
The second logs at the Error level, passing two key-value pairs.
The third produces the same output as the second, but more efficiently.
Functions like `Any` and `Int` construct `slog.Attr` values, which are key-value
pairs that avoid memory allocation for most values.
### Main Types
The `slog` package contains three main types:
- `Logger` is the frontend, providing output methods like `Info` and `LogAttrs` that
developers call to produce logs.
- Each call to a `Logger` output method creates a `Record`.
- The `Record` is passed to a `Handler` for output.
We cover these bottom-up, beginning with `Handler`.
### Handlers
A `Handler` describes the logging backend.
It handles log records produced by a `Logger`.
A typical handler may print log records to standard error, or write them to
a file, database or network service, or perhaps augment them with additional attributes and
pass them on to another handler.
```
type Handler interface {
// Enabled reports whether the handler handles records at the given level.
// The handler ignores records whose level is lower.
// It is called early, before any arguments are processed,
// to save effort if the log event should be discarded.
// The Logger's context is passed so Enabled can use its values
// to make a decision. The context may be nil.
Enabled(context.Context, Level) bool
// Handle handles the Record.
// It will only be called if Enabled returns true.
//
// The first argument is the context of the Logger that created the Record,
// which may be nil.
// It is present solely to provide Handlers access to the context's values.
// Canceling the context should not affect record processing.
// (Among other things, log messages may be necessary to debug a
// cancellation-related problem.)
//
// Handle methods that produce output should observe the following rules:
// - If r.Time is the zero time, ignore the time.
// - If an Attr's key is the empty string and the value is not a group,
// ignore the Attr.
// - If a group's key is empty, inline the group's Attrs.
// - If a group has no Attrs (even if it has a non-empty key),
// ignore it.
Handle(ctx context.Context, r Record) error
// WithAttrs returns a new Handler whose attributes consist of
// both the receiver's attributes and the arguments.
// The Handler owns the slice: it may retain, modify or discard it.
WithAttrs(attrs []Attr) Handler
// WithGroup returns a new Handler with the given group appended to
// the receiver's existing groups.
// The keys of all subsequent attributes, whether added by With or in a
// Record, should be qualified by the sequence of group names.
//
// How this qualification happens is up to the Handler, so long as
// this Handler's attribute keys differ from those of another Handler
// with a different sequence of group names.
//
// A Handler should treat WithGroup as starting a Group of Attrs that ends
// at the end of the log event. That is,
//
// logger.WithGroup("s").LogAttrs(level, msg, slog.Int("a", 1), slog.Int("b", 2))
//
// should behave like
//
// logger.LogAttrs(level, msg, slog.Group("s", slog.Int("a", 1), slog.Int("b", 2)))
//
// If the name is empty, WithGroup returns the receiver.
WithGroup(name string) Handler
}
```
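As a sketch of how a backend implements this interface, the hypothetical `countingHandler` below wraps another `Handler` and counts the records it handles; it is an illustration, not part of the proposal. (Note that the released `log/slog` package's `NewTextHandler` takes an options argument, unlike the preliminary API shown elsewhere in this document.)

```go
package main

import (
	"context"
	"fmt"
	"log/slog"
	"os"
)

// countingHandler wraps another Handler and counts handled records.
type countingHandler struct {
	inner slog.Handler
	count *int
}

func (h countingHandler) Enabled(ctx context.Context, level slog.Level) bool {
	return h.inner.Enabled(ctx, level)
}

func (h countingHandler) Handle(ctx context.Context, r slog.Record) error {
	*h.count++ // only called when Enabled returned true
	return h.inner.Handle(ctx, r)
}

func (h countingHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return countingHandler{inner: h.inner.WithAttrs(attrs), count: h.count}
}

func (h countingHandler) WithGroup(name string) slog.Handler {
	if name == "" {
		return h // per the Handler contract
	}
	return countingHandler{inner: h.inner.WithGroup(name), count: h.count}
}

// handledCount logs three messages; the Debug one is filtered out by
// Enabled (the default level is Info), so only two reach Handle.
func handledCount() int {
	var n int
	logger := slog.New(countingHandler{inner: slog.NewTextHandler(os.Stderr, nil), count: &n})
	logger.Info("hello", "name", "Al")
	logger.Debug("dropped")
	logger.Error("oops", "status", 500)
	return n
}

func main() {
	fmt.Println(handledCount()) // 2
}
```

Because `Enabled` is consulted before any arguments are processed, the suppressed `Debug` call never reaches `Handle` at all.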
The `slog` package provides two handlers, one for simple textual output and one
for JSON. They are described in more detail below.
### The `Record` Type
A Record holds information about a log event.
```
type Record struct {
// The time at which the output method (Log, Info, etc.) was called.
Time time.Time
// The log message.
Message string
// The level of the event.
Level Level
// The program counter at the time the record was constructed, as determined
// by runtime.Callers. If zero, no program counter is available.
//
// The only valid use for this value is as an argument to
// [runtime.CallersFrames]. In particular, it must not be passed to
// [runtime.FuncForPC].
PC uintptr
// Has unexported fields.
}
```
Records have two methods for accessing the sequence of `Attr`s. This API allows
an efficient implementation of the `Attr` sequence that avoids copying and
minimizes allocation.
```
func (r Record) Attrs(f func(Attr))
Attrs calls f on each Attr in the Record.
func (r Record) NumAttrs() int
NumAttrs returns the number of attributes in the Record.
```
So that other logging backends can wrap `Handler`s, it is possible to construct
a `Record` directly and add attributes to it:
```
func NewRecord(t time.Time, level Level, msg string, pc uintptr) Record
NewRecord creates a Record from the given arguments. Use Record.AddAttrs to
add attributes to the Record.
NewRecord is intended for logging APIs that want to support a Handler as a
backend.
func (r *Record) AddAttrs(attrs ...Attr)
AddAttrs appends the given Attrs to the Record's list of Attrs. It resolves
the Attrs before doing so.
func (r *Record) Add(args ...any)
Add converts the args to Attrs as described in Logger.Log, then appends the
Attrs to the Record's list of Attrs. It resolves the Attrs before doing so.
```
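A short sketch of how a wrapping logging API might build a `Record` directly and inspect its attributes. (In the released `log/slog` package, `Attrs` takes a `func(Attr) bool` callback, where returning `true` continues iteration — a slight change from the signature shown above.)

```go
package main

import (
	"fmt"
	"log/slog"
	"time"
)

// recordKeys constructs a Record, adds attributes both as Attrs and
// as alternating key-value arguments, and collects the keys in order.
func recordKeys() []string {
	r := slog.NewRecord(time.Now(), slog.LevelError, "oops", 0)
	r.AddAttrs(slog.Int("status", 500))
	r.Add("err", "connection closed") // alternating key-value form

	var keys []string
	r.Attrs(func(a slog.Attr) bool {
		keys = append(keys, a.Key)
		return true // keep iterating
	})
	return keys
}

func main() {
	fmt.Println(recordKeys()) // [status err]
}
```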
Copies of a `Record` share state. A `Record` should not be modified after
handing out a copy of it. Use `Clone` for that:
```
func (r Record) Clone() Record
Clone returns a copy of the record with no shared state. The original record
and the clone can both be modified without interfering with each other.
```
### The `Attr` and `Value` Types
An `Attr` is a key-value pair.
```
type Attr struct {
Key string
Value Value
}
```
There are convenience functions for constructing `Attr`s with various value
types, as well as `Equal` and `String` methods.
```
func Any(key string, value any) Attr
Any returns an Attr for the supplied value. See Value.AnyValue for how
values are treated.
func Bool(key string, v bool) Attr
Bool returns an Attr for a bool.
func Duration(key string, v time.Duration) Attr
Duration returns an Attr for a time.Duration.
func Float64(key string, v float64) Attr
Float64 returns an Attr for a floating-point number.
func Group(key string, as ...Attr) Attr
Group returns an Attr for a Group Value. The caller must not subsequently
mutate the argument slice.
Use Group to collect several Attrs under a single key on a log line, or as
the result of LogValue in order to log a single value as multiple Attrs.
func Int(key string, value int) Attr
Int converts an int to an int64 and returns an Attr with that value.
func Int64(key string, value int64) Attr
Int64 returns an Attr for an int64.
func String(key, value string) Attr
String returns an Attr for a string value.
func Time(key string, v time.Time) Attr
Time returns an Attr for a time.Time. It discards the monotonic portion.
func Uint64(key string, v uint64) Attr
Uint64 returns an Attr for a uint64.
func (a Attr) Equal(b Attr) bool
Equal reports whether a and b have equal keys and values.
func (a Attr) String() string
```
A `Value` can represent any Go value, but unlike type `any`, it can represent
most small values without an allocation.
In particular, integer types and strings, which account for the vast
majority of values in log messages, do not require allocation.
The default version of `Value` uses package `unsafe` to store any value in three
machine words.
The version without `unsafe` requires five.
There are constructor functions for common types, and a general one,
`AnyValue`, that dispatches on its argument type.
```
type Value struct {
// Has unexported fields.
}
func AnyValue(v any) Value
AnyValue returns a Value for the supplied value.
Given a value of one of Go's predeclared string, bool, or (non-complex)
numeric types, AnyValue returns a Value of kind String, Bool, Uint64, Int64,
or Float64. The width of the original numeric type is not preserved.
Given a time.Time or time.Duration value, AnyValue returns a Value of kind
TimeKind or DurationKind. The monotonic time is not preserved.
For nil, or values of all other types, including named types whose
underlying type is numeric, AnyValue returns a value of kind AnyKind.
func BoolValue(v bool) Value
BoolValue returns a Value for a bool.
func DurationValue(v time.Duration) Value
DurationValue returns a Value for a time.Duration.
func Float64Value(v float64) Value
Float64Value returns a Value for a floating-point number.
func GroupValue(as ...Attr) Value
GroupValue returns a new Value for a list of Attrs. The caller must not
subsequently mutate the argument slice.
func Int64Value(v int64) Value
Int64Value returns a Value for an int64.
func IntValue(v int) Value
IntValue returns a Value for an int.
func StringValue(value string) Value
    StringValue returns a new Value for a string.
func TimeValue(v time.Time) Value
TimeValue returns a Value for a time.Time. It discards the monotonic
portion.
func Uint64Value(v uint64) Value
Uint64Value returns a Value for a uint64.
```
Extracting Go values from a `Value` is reminiscent of `reflect.Value`: there is
a `Kind` method that returns an enum of type `Kind`, and a method for each `Kind`
that returns the value or panics if it is the wrong kind.
```
type Kind int
Kind is the kind of a Value.
const (
AnyKind Kind = iota
BoolKind
DurationKind
Float64Kind
Int64Kind
StringKind
TimeKind
Uint64Kind
GroupKind
LogValuerKind
)
func (v Value) Any() any
Any returns v's value as an any.
func (v Value) Bool() bool
Bool returns v's value as a bool. It panics if v is not a bool.
func (v Value) Duration() time.Duration
Duration returns v's value as a time.Duration. It panics if v is not a
time.Duration.
func (v Value) Equal(w Value) bool
    Equal reports whether v and w represent the same Go value.
func (v Value) Float64() float64
Float64 returns v's value as a float64. It panics if v is not a float64.
func (v Value) Group() []Attr
Group returns v's value as a []Attr. It panics if v's Kind is not GroupKind.
func (v Value) Int64() int64
Int64 returns v's value as an int64. It panics if v is not a signed integer.
func (v Value) Kind() Kind
Kind returns v's Kind.
func (v Value) LogValuer() LogValuer
LogValuer returns v's value as a LogValuer. It panics if v is not a
LogValuer.
func (v Value) Resolve() Value
Resolve repeatedly calls LogValue on v while it implements LogValuer, and
returns the result. If the number of LogValue calls exceeds a threshold, a
Value containing an error is returned. Resolve's return value is guaranteed
not to be of Kind LogValuerKind.
func (v Value) String() string
String returns Value's value as a string, formatted like fmt.Sprint.
Unlike the methods Int64, Float64, and so on, which panic if v is of the
wrong kind, String never panics.
func (v Value) Time() time.Time
Time returns v's value as a time.Time. It panics if v is not a time.Time.
func (v Value) Uint64() uint64
Uint64 returns v's value as a uint64. It panics if v is not an unsigned
integer.
```
#### The LogValuer interface
A LogValuer is any Go value that can convert itself into a Value for
logging.
This mechanism may be used to defer expensive operations until they are
needed, or to expand a single value into a sequence of components.
```
type LogValuer interface {
LogValue() Value
}
```
`Value.Resolve` can be used to call the `LogValue` method.
```
func (v Value) Resolve() Value
Resolve repeatedly calls LogValue on v while it implements LogValuer, and
returns the result. If the number of LogValue calls exceeds a threshold, a
Value containing an error is returned. Resolve's return value is guaranteed
not to be of Kind LogValuerKind.
```
The Attrs passed to a `Handler.WithAttrs`, and the Attrs obtained
via `Record.Attrs`, have already been resolved, that is, replaced
with a call to `Resolve`.
As an example of `LogValuer`, a type could obscure its value in log output like
so:
```
type Password string
func (p Password) LogValue() slog.Value {
return slog.StringValue("REDACTED")
}
```
### Loggers
A Logger records structured information about each call to its Log, Debug,
Info, Warn, and Error methods. For each call, it creates a `Record` and passes
it to a `Handler`.
```
type Logger struct {
// Has unexported fields.
}
```
A `Logger` wraps a `Handler`. Use `New` to create a `Logger` with a
`Handler`, and the `Handler` method to retrieve it.
```
func New(h Handler) *Logger
New creates a new Logger with the given Handler.
func (l *Logger) Handler() Handler
Handler returns l's Handler.
```
There is a single, global default `Logger`.
It can be set and retrieved with the `SetDefault` and
`Default` functions.
```
func SetDefault(l *Logger)
SetDefault makes l the default Logger. After this call, output from the
log package's default Logger (as with log.Print, etc.) will be logged at
LevelInfo using l's Handler.
func Default() *Logger
Default returns the default Logger.
```
The `slog` package works to ensure consistent output with the `log` package.
Writing to `slog`'s default logger without setting a handler will write
structured text to `log`'s default logger.
Once a handler is set with `SetDefault`, as in the example above, the default
`log` logger will send its text output to the structured handler.
#### Output methods
`Logger`'s output methods produce log output by constructing a `Record` and
passing it to the `Logger`'s handler.
There are two output methods for each of the four most common levels, one which
takes a context and one which doesn't. There is also a `Log` method
that takes any level, and a `LogAttrs` method that accepts only `Attr`s as an
optimization, both of which take a context.
These methods first call `Handler.Enabled` to see if they should proceed.
Each of these methods has a corresponding top-level function that uses the
default logger.
The context is passed to Handler.Enabled and Handler.Handle. Handlers sometimes
need to retrieve values from a context, tracing spans being a prime example.
We will provide a vet check for the methods that take a list of `any` arguments
to catch problems with missing keys or values.
```
func (l *Logger) Log(ctx context.Context, level Level, msg string, args ...any)
Log emits a log record with the current time and the given level and
message. The Record's Attrs consist of the Logger's attributes followed by
the Attrs specified by args.
The attribute arguments are processed as follows:
- If an argument is an Attr, it is used as is.
- If an argument is a string and this is not the last argument, the
following argument is treated as the value and the two are combined into
an Attr.
- Otherwise, the argument is treated as a value with key "!BADKEY".
func (l *Logger) LogAttrs(ctx context.Context, level Level, msg string, attrs ...Attr)
LogAttrs is a more efficient version of Logger.Log that accepts only Attrs.
func (l *Logger) Debug(msg string, args ...any)
Debug logs at LevelDebug.
func (l *Logger) Info(msg string, args ...any)
Info logs at LevelInfo.
func (l *Logger) Warn(msg string, args ...any)
Warn logs at LevelWarn.
func (l *Logger) Error(msg string, args ...any)
Error logs at LevelError.
func (l *Logger) DebugCtx(ctx context.Context, msg string, args ...any)
DebugCtx logs at LevelDebug with the given context.
func (l *Logger) InfoCtx(ctx context.Context, msg string, args ...any)
InfoCtx logs at LevelInfo with the given context.
func (l *Logger) WarnCtx(ctx context.Context, msg string, args ...any)
WarnCtx logs at LevelWarn with the given context.
func (l *Logger) ErrorCtx(ctx context.Context, msg string, args ...any)
ErrorCtx logs at LevelError with the given context.
```
Loggers can have attributes as well, added by the `With` method.
```
func (l *Logger) With(args ...any) *Logger
With returns a new Logger that includes the given arguments, converted to
Attrs as in Logger.Log. The Attrs will be added to each output from the
Logger.
The new Logger's handler is the result of calling WithAttrs on the
receiver's handler.
```
### Groups
Although most attribute values are simple types like strings and integers,
sometimes aggregate or composite values are desired.
For example, consider
```
type Name struct {
First, Last string
}
```
To handle values like this we include `GroupKind` for groups of Attrs.
To log a `Name` `n` as a group, we could write
```
slog.Info("message",
slog.Group("name",
slog.String("first", n.First),
slog.String("last", n.Last),
),
)
```
Handlers should qualify a group's members by its name.
What "qualify" means depends on the handler.
A handler that supports recursive data, like the
built-in `JSONHandler`, can use the group name as a key to a nested object:
```
"name": {"first": "Ren", "last": "Hoek"}
```
Handlers that use a flat output representation, like the built-in `TextHandler`,
could prefix the group member's keys with the group name.
This is `TextHandler`'s output:
```
name.first=Ren name.last=Hoek
```
If the author of the `Name` type wanted to arrange matters so that `Name`s
always logged in this way, they could implement the `LogValuer` interface discussed
[above](#the-logvaluer-interface):
```
func (n Name) LogValue() slog.Value {
return slog.GroupValue(
slog.String("first", n.First),
slog.String("last", n.Last),
)
}
```
Now, if `n` is a `Name`, the log line
```
slog.Info("message", "name", n)
```
will render exactly like the example with an explicit `slog.Group` above.
#### Logger groups
Sometimes it is useful to qualify all the attribute keys from a Logger.
For example, an application may be composed of multiple subsystems, some of
which may use the same attribute keys.
Qualifying each subsystem's keys is one way to avoid duplicates.
This can be done with `Logger.WithGroup`: hand each subsystem a `Logger`
with a different group name.
```
func (l *Logger) WithGroup(name string) *Logger
WithGroup returns a new Logger that starts a group. The keys of all
attributes added to the Logger will be qualified by the given name.
```
### Levels
A Level is the importance or severity of a log event. The higher the level,
the more important or severe the event.
```
type Level int
```
The `slog` package provides names for common levels.
The specific level numbers below are not significant; any system can map
them to another numbering scheme if it wishes. We picked them to satisfy
three constraints.
First, we wanted the default level to be Info. Since Levels are ints, we
made Info the zero value.
Second, we wanted to make it easy to work with verbosities instead of levels.
As discussed above,
some logging packages like [glog] and [Logr] use verbosities instead, where
a verbosity of 0 corresponds to the Info level and higher values represent less
important messages.
Negating a verbosity converts it into a Level. To use a verbosity of `v` with
this design, pass `-v` to `Log` or `LogAttrs`.
Third, we wanted some room between levels to accommodate schemes with
named levels between ours. For example, Google Cloud Logging defines a
Notice level between Info and Warn. Since there are only a few of these
intermediate levels, the gap between the numbers need not be large.
Our gap of 4 matches OpenTelemetry's mapping. Subtracting 9 from an
OpenTelemetry level in the DEBUG, INFO, WARN and ERROR ranges converts it to
the corresponding slog Level range. OpenTelemetry also has the names TRACE
and FATAL, which slog does not. But those OpenTelemetry levels can still be
represented as slog Levels by using the appropriate integers.
```
const (
LevelDebug Level = -4
LevelInfo Level = 0
LevelWarn Level = 4
LevelError Level = 8
)
```
The `Leveler` interface generalizes `Level`, so that a `Handler.Enabled`
implementation can vary its behavior. One way to get dynamic behavior
is to use `LevelVar`.
```
type Leveler interface {
Level() Level
}
A Leveler provides a Level value.
As Level itself implements Leveler, clients typically supply a Level value
wherever a Leveler is needed, such as in HandlerOptions. Clients who need to
vary the level dynamically can provide a more complex Leveler implementation
such as *LevelVar.
func (l Level) Level() Level
Level returns the receiver. It implements Leveler.
type LevelVar struct {
// Has unexported fields.
}
A LevelVar is a Level variable, to allow a Handler level to change
dynamically. It implements Leveler as well as a Set method, and it is safe
for use by multiple goroutines. The zero LevelVar corresponds to LevelInfo.
func (v *LevelVar) Level() Level
Level returns v's level.
func (v *LevelVar) Set(l Level)
Set sets v's level to l.
func (v *LevelVar) String() string
```
### Provided Handlers
The `slog` package includes two handlers, which behave similarly except for
their output format. `TextHandler` emits attributes as `KEY=VALUE`, and
`JSONHandler` writes line-delimited JSON objects.
Both can be configured using a `HandlerOptions`.
A zero `HandlerOptions` consists entirely of default values.
```
type HandlerOptions struct {
// When AddSource is true, the handler adds a ("source", "file:line")
// attribute to the output indicating the source code position of the log
// statement. AddSource is false by default to skip the cost of computing
// this information.
AddSource bool
// Level reports the minimum record level that will be logged.
// The handler discards records with lower levels.
// If Level is nil, the handler assumes LevelInfo.
// The handler calls Level.Level for each record processed;
// to adjust the minimum level dynamically, use a LevelVar.
Level Leveler
// ReplaceAttr is called to rewrite each non-group attribute before it is logged.
// The attribute's value has been resolved (see [Value.Resolve]).
// If ReplaceAttr returns an Attr with Key == "", the attribute is discarded.
//
// The built-in attributes with keys "time", "level", "source", and "msg"
// are passed to this function, except that time is omitted
// if zero, and source is omitted if AddSource is false.
//
// The first argument is a list of currently open groups that contain the
// Attr. It must not be retained or modified. ReplaceAttr is never called
// for Group attributes, only their contents. For example, the attribute
// list
//
// Int("a", 1), Group("g", Int("b", 2)), Int("c", 3)
//
// results in consecutive calls to ReplaceAttr with the following arguments:
//
// nil, Int("a", 1)
// []string{"g"}, Int("b", 2)
// nil, Int("c", 3)
//
// ReplaceAttr can be used to change the default keys of the built-in
// attributes, convert types (for example, to replace a `time.Time` with the
// integer seconds since the Unix epoch), sanitize personal information, or
// remove attributes from the output.
ReplaceAttr func(groups []string, a Attr) Attr
}
```
## Interoperating with Other Log Packages
As stated earlier, we expect that this package will interoperate with other log
packages.
One way that could happen is for another package's frontend to send
`slog.Record`s to a `slog.Handler`.
For instance, a `logr.LogSink` implementation could construct a `Record` from a
message and list of keys and values, and pass it to a `Handler`.
That is facilitated by `NewRecord`, `Record.Add` and `Record.AddAttrs`,
described above.
Another way for two log packages to work together is for the other package to
wrap its backend as a `slog.Handler`, so users could write code with the `slog`
package's API but connect the results to an existing `logr.LogSink`, for
example.
This involves writing a `slog.Handler` that wraps the other logger's backend.
Doing so doesn't seem to require any additional support from this package.
## Testing Package
To verify that a Handler's behavior matches the specification, we propose
a package testing/slogtest with one exported function:
```
// TestHandler tests a [slog.Handler].
// If TestHandler finds any misbehaviors, it returns an error for each,
// combined into a single error with errors.Join.
//
// TestHandler installs the given Handler in a [slog.Logger] and
// makes several calls to the Logger's output methods.
//
// The results function is invoked after all such calls.
// It should return a slice of map[string]any, one for each call to a Logger output method.
// The keys and values of the map should correspond to the keys and values of the Handler's
// output. Each group in the output should be represented as its own nested map[string]any.
//
// If the Handler outputs JSON, then calling [encoding/json.Unmarshal] with a `map[string]any`
// will create the right data structure.
func TestHandler(h slog.Handler, results func() []map[string]any) error
```
## Acknowledgements
Ian Cottrell's ideas about high-performance observability, captured in the
`golang.org/x/exp/event` package, informed a great deal of the design and
implementation of this proposal.
Seth Vargo’s ideas on logging were a source of motivation and inspiration. His
comments on an earlier draft helped improve the proposal.
Michael Knyszek explained how logging could work with runtime tracing.
Tim Hockin helped us understand logr's design choices, which led to significant
improvements.
Abhinav Gupta helped me understand Zap in depth, which informed the design.
Russ Cox provided valuable feedback and helped shape the final design.
Alan Donovan's CL reviews greatly improved the implementation.
The participants in the [GitHub
discussion](https://github.com/golang/go/discussions/54763) helped us confirm we
were on the right track, and called our attention to important features we had
overlooked (and have since added).
[zerolog]: https://pkg.go.dev/github.com/rs/zerolog
[Zerolog]: https://pkg.go.dev/github.com/rs/zerolog
[logfmt]: https://pkg.go.dev/github.com/kr/logfmt
[zap]: https://pkg.go.dev/go.uber.org/zap
[logr]: https://pkg.go.dev/github.com/go-logr/logr
[Logr]: https://pkg.go.dev/github.com/go-logr/logr
[hclog]: https://pkg.go.dev/github.com/hashicorp/go-hclog
[glog]: https://pkg.go.dev/github.com/golang/glog
## Appendix: API
```
package slog
Package slog provides structured logging, in which log records include a
message, a severity level, and various other attributes expressed as key-value
pairs.
It defines a type, Logger, which provides several methods (such as Logger.Info
and Logger.Error) for reporting events of interest.
Each Logger is associated with a Handler. A Logger output method creates a
Record from the method arguments and passes it to the Handler, which decides how
to handle it. There is a default Logger accessible through top-level functions
(such as Info and Error) that call the corresponding Logger methods.
A log record consists of a time, a level, a message, and a set of key-value
pairs, where the keys are strings and the values may be of any type. As an
example,
slog.Info("hello", "count", 3)
creates a record containing the time of the call, a level of Info, the message
"hello", and a single pair with key "count" and value 3.
The Info top-level function calls the Logger.Info method on the default Logger.
In addition to Logger.Info, there are methods for Debug, Warn and Error levels.
Besides these convenience methods for common levels, there is also a Logger.Log
method which takes the level as an argument. Each of these methods has a
corresponding top-level function that uses the default logger.
The default handler formats the log record's message, time, level, and
attributes as a string and passes it to the log package.
2022/11/08 15:28:26 INFO hello count=3
For more control over the output format, create a logger with a different
handler. This statement uses New to create a new logger with a TextHandler that
writes structured records in text form to standard error:
logger := slog.New(slog.NewTextHandler(os.Stderr))
TextHandler output is a sequence of key=value pairs, easily and unambiguously
parsed by machine. This statement:
logger.Info("hello", "count", 3)
produces this output:
time=2022-11-08T15:28:26.000-05:00 level=INFO msg=hello count=3
The package also provides JSONHandler, whose output is line-delimited JSON:
logger := slog.New(slog.NewJSONHandler(os.Stdout))
logger.Info("hello", "count", 3)
produces this output:
{"time":"2022-11-08T15:28:26.000000000-05:00","level":"INFO","msg":"hello","count":3}
Both TextHandler and JSONHandler can be configured with a HandlerOptions.
There are options for setting the minimum level (see Levels, below), displaying
the source file and line of the log call, and modifying attributes before they
are logged.
Setting a logger as the default with
slog.SetDefault(logger)
will cause the top-level functions like Info to use it. SetDefault also updates
the default logger used by the log package, so that existing applications that
use log.Printf and related functions will send log records to the logger's
handler without needing to be rewritten.
# Attrs and Values
An Attr is a key-value pair. The Logger output methods accept Attrs as well as
alternating keys and values. The statement
slog.Info("hello", slog.Int("count", 3))
behaves the same as
slog.Info("hello", "count", 3)
There are convenience constructors for Attr such as Int, String, and Bool for
common types, as well as the function Any for constructing Attrs of any type.
The value part of an Attr is a type called Value. Like an [any], a Value can
hold any Go value, but it can represent typical values, including all numbers
and strings, without an allocation.
For the most efficient log output, use Logger.LogAttrs. It is similar to
Logger.Log but accepts only Attrs, not alternating keys and values; this allows
it, too, to avoid allocation.
The call
logger.LogAttrs(nil, slog.LevelInfo, "hello", slog.Int("count", 3))
is the most efficient way to achieve the same output as
slog.Info("hello", "count", 3)
Some attributes are common to many log calls. For example, you may wish to
include the URL or trace identifier of a server request with all log events
arising from the request. Rather than repeat the attribute with every log call,
you can use Logger.With to construct a new Logger containing the attributes:
logger2 := logger.With("url", r.URL)
The arguments to With are the same key-value pairs used in Logger.Info.
The result is a new Logger with the same handler as the original, but additional
attributes that will appear in the output of every call.
# Levels
A Level is an integer representing the importance or severity of a log event.
The higher the level, the more severe the event. This package defines constants
for the most common levels, but any int can be used as a level.
In an application, you may wish to log messages only at a certain level or
greater. One common configuration is to log messages at Info or higher levels,
suppressing debug logging until it is needed. The built-in handlers can be
configured with the minimum level to output by setting [HandlerOptions.Level].
The program's `main` function typically does this. The default value is
LevelInfo.
Setting the [HandlerOptions.Level] field to a Level value fixes the handler's
minimum level throughout its lifetime. Setting it to a LevelVar allows the level
to be varied dynamically. A LevelVar holds a Level and is safe to read or write
from multiple goroutines. To vary the level dynamically for an entire program,
first initialize a global LevelVar:
var programLevel = new(slog.LevelVar) // Info by default
Then use the LevelVar to construct a handler, and make it the default:
h := slog.HandlerOptions{Level: programLevel}.NewJSONHandler(os.Stderr)
slog.SetDefault(slog.New(h))
Now the program can change its logging level with a single statement:
programLevel.Set(slog.LevelDebug)
# Groups
Attributes can be collected into groups. A group has a name that is used to
qualify the names of its attributes. How this qualification is displayed depends
on the handler. TextHandler separates the group and attribute names with a dot.
JSONHandler treats each group as a separate JSON object, with the group name as
the key.
Use Group to create a Group Attr from a name and a list of Attrs:
slog.Group("request",
slog.String("method", r.Method),
slog.Any("url", r.URL))
TextHandler would display this group as
request.method=GET request.url=http://example.com
JSONHandler would display it as
"request":{"method":"GET","url":"http://example.com"}
Use Logger.WithGroup to qualify all of a Logger's output with a group name.
Calling WithGroup on a Logger results in a new Logger with the same Handler as
the original, but with all its attributes qualified by the group name.
This can help prevent duplicate attribute keys in large systems, where
subsystems might use the same keys. Pass each subsystem a different Logger with
its own group name so that potential duplicates are qualified:
logger := slog.Default().With("id", systemID)
parserLogger := logger.WithGroup("parser")
parseInput(input, parserLogger)
When parseInput logs with parserLogger, its keys will be qualified with
"parser", so even if it uses the common key "id", the log line will have
distinct keys.
# Contexts
Some handlers may wish to include information from the context.Context that is
available at the call site. One example of such information is the identifier
for the current span when tracing is enabled.
The Logger.Log and Logger.LogAttrs methods take a context as a first argument,
as do their corresponding top-level functions.
Although the convenience methods on Logger (Info and so on) and the
corresponding top-level functions do not take a context, the alternatives ending
in "Ctx" do. For example,
slog.InfoCtx(ctx, "message")
It is recommended to pass a context to an output method if one is available.
# Advanced topics
## Customizing a type's logging behavior
If a type implements the LogValuer interface, the Value returned from its
LogValue method is used for logging. You can use this to control how values
of the type appear in logs. For example, you can redact secret information
like passwords, or gather a struct's fields in a Group. See the examples under
LogValuer for details.
A LogValue method may return a Value that itself implements LogValuer. The
Value.Resolve method handles these cases carefully, avoiding infinite loops and
unbounded recursion. Handler authors and others may wish to use Value.Resolve
instead of calling LogValue directly.
## Wrapping output methods
The logger functions use reflection over the call stack to find the file name
and line number of the logging call within the application. This can produce
incorrect source information for functions that wrap slog. For instance,
if you define this function in file mylog.go:
func Infof(format string, args ...any) {
slog.Default().Info(fmt.Sprintf(format, args...))
}
and you call it like this in main.go:
    Infof("hello, %s", "world")
then slog will report the source file as mylog.go, not main.go.
A correct implementation of Infof will obtain the source location (pc) and
pass it to NewRecord. The Infof function in the package-level example called
"wrapping" demonstrates how to do this.
## Working with Records
Sometimes a Handler will need to modify a Record before passing it on to another
Handler or backend. A Record contains a mixture of simple public fields (e.g.
Time, Level, Message) and hidden fields that refer to state (such as attributes)
indirectly. This means that modifying a simple copy of a Record (e.g. by calling
Record.Add or Record.AddAttrs to add attributes) may have unexpected effects
on the original. Before modifying a Record, use [Clone] to create a copy that
shares no state with the original, or create a new Record with NewRecord and
build up its Attrs by traversing the old ones with Record.Attrs.
## Performance considerations
If profiling your application demonstrates that logging is taking significant
time, the following suggestions may help.
If many log lines have a common attribute, use Logger.With to create a Logger
with that attribute. The built-in handlers will format that attribute only once,
at the call to Logger.With. The Handler interface is designed to allow that
optimization, and a well-written Handler should take advantage of it.
The arguments to a log call are always evaluated, even if the log event is
discarded. If possible, defer computation so that it happens only if the value
is actually logged. For example, consider the call
slog.Info("starting request", "url", r.URL.String()) // may compute String unnecessarily
The URL.String method will be called even if the logger discards Info-level
events. Instead, pass the URL directly:
slog.Info("starting request", "url", &r.URL) // calls URL.String only if needed
The built-in TextHandler will call its String method, but only if the log event
is enabled. Avoiding the call to String also preserves the structure of the
underlying value. For example, JSONHandler emits the components of the parsed
URL as a JSON object. If you want to avoid eagerly paying the cost of the String
call without causing the handler to potentially inspect the structure of the
value, wrap the value in a fmt.Stringer implementation that hides its Marshal
methods.
You can also use the LogValuer interface to avoid unnecessary work in disabled
log calls. Say you need to log some expensive value:
slog.Debug("frobbing", "value", computeExpensiveValue(arg))
Even if this line is disabled, computeExpensiveValue will be called. To avoid
that, define a type implementing LogValuer:
type expensive struct { arg int }
func (e expensive) LogValue() slog.Value {
return slog.AnyValue(computeExpensiveValue(e.arg))
}
Then use a value of that type in log calls:
slog.Debug("frobbing", "value", expensive{arg})
Now computeExpensiveValue will only be called when the line is enabled.
The built-in handlers acquire a lock before calling io.Writer.Write to ensure
that each record is written in one piece. User-defined handlers are responsible
for their own locking.
CONSTANTS
const (
// TimeKey is the key used by the built-in handlers for the time
// when the log method is called. The associated Value is a [time.Time].
TimeKey = "time"
// LevelKey is the key used by the built-in handlers for the level
// of the log call. The associated value is a [Level].
LevelKey = "level"
// MessageKey is the key used by the built-in handlers for the
// message of the log call. The associated value is a string.
MessageKey = "msg"
// SourceKey is the key used by the built-in handlers for the source file
// and line of the log call. The associated value is a string.
SourceKey = "source"
)
Keys for "built-in" attributes.
FUNCTIONS
func Debug(msg string, args ...any)
Debug calls Logger.Debug on the default logger.
func DebugCtx(ctx context.Context, msg string, args ...any)
DebugCtx calls Logger.DebugCtx on the default logger.
func Error(msg string, args ...any)
Error calls Logger.Error on the default logger.
func ErrorCtx(ctx context.Context, msg string, args ...any)
ErrorCtx calls Logger.ErrorCtx on the default logger.
func Info(msg string, args ...any)
Info calls Logger.Info on the default logger.
func InfoCtx(ctx context.Context, msg string, args ...any)
InfoCtx calls Logger.InfoCtx on the default logger.
func Log(ctx context.Context, level Level, msg string, args ...any)
Log calls Logger.Log on the default logger.
func LogAttrs(ctx context.Context, level Level, msg string, attrs ...Attr)
LogAttrs calls Logger.LogAttrs on the default logger.
func NewLogLogger(h Handler, level Level) *log.Logger
NewLogLogger returns a new log.Logger such that each call to its Output
method dispatches a Record to the specified handler. The logger acts as a
bridge from the older log API to newer structured logging handlers.
func SetDefault(l *Logger)
SetDefault makes l the default Logger. After this call, output from the
log package's default Logger (as with log.Print, etc.) will be logged at
LevelInfo using l's Handler.
func Warn(msg string, args ...any)
Warn calls Logger.Warn on the default logger.
func WarnCtx(ctx context.Context, msg string, args ...any)
WarnCtx calls Logger.WarnCtx on the default logger.
TYPES
type Attr struct {
Key string
Value Value
}
An Attr is a key-value pair.
func Any(key string, value any) Attr
Any returns an Attr for the supplied value. See [AnyValue] for how
values are treated.
func Bool(key string, v bool) Attr
Bool returns an Attr for a bool.
func Duration(key string, v time.Duration) Attr
Duration returns an Attr for a time.Duration.
func Float64(key string, v float64) Attr
Float64 returns an Attr for a floating-point number.
func Group(key string, as ...Attr) Attr
Group returns an Attr for a Group Value. The caller must not subsequently
mutate the argument slice.
Use Group to collect several Attrs under a single key on a log line, or as
the result of LogValue in order to log a single value as multiple Attrs.
func Int(key string, value int) Attr
Int converts an int to an int64 and returns an Attr with that value.
func Int64(key string, value int64) Attr
Int64 returns an Attr for an int64.
func String(key, value string) Attr
String returns an Attr for a string value.
func Time(key string, v time.Time) Attr
Time returns an Attr for a time.Time. It discards the monotonic portion.
func Uint64(key string, v uint64) Attr
Uint64 returns an Attr for a uint64.
func (a Attr) Equal(b Attr) bool
Equal reports whether a and b have equal keys and values.
func (a Attr) String() string
type Handler interface {
// Enabled reports whether the handler handles records at the given level.
// The handler ignores records whose level is lower.
// It is called early, before any arguments are processed,
// to save effort if the log event should be discarded.
// If called from a Logger method, the first argument is the context
// passed to that method, or context.Background() if nil was passed
// or the method does not take a context.
// The context is passed so Enabled can use its values
// to make a decision.
Enabled(context.Context, Level) bool
// Handle handles the Record.
// It will only be called when Enabled returns true.
// The Context argument is as for Enabled.
// It is present solely to provide Handlers access to the context's values.
// Canceling the context should not affect record processing.
// (Among other things, log messages may be necessary to debug a
// cancellation-related problem.)
//
// Handle methods that produce output should observe the following rules:
// - If r.Time is the zero time, ignore the time.
// - If r.PC is zero, ignore it.
// - If an Attr's key is the empty string and the value is not a group,
// ignore the Attr.
// - If a group's key is empty, inline the group's Attrs.
// - If a group has no Attrs (even if it has a non-empty key),
// ignore it.
Handle(context.Context, Record) error
// WithAttrs returns a new Handler whose attributes consist of
// both the receiver's attributes and the arguments.
// The Handler owns the slice: it may retain, modify or discard it.
// [Logger.With] will resolve the Attrs.
WithAttrs(attrs []Attr) Handler
// WithGroup returns a new Handler with the given group appended to
// the receiver's existing groups.
// The keys of all subsequent attributes, whether added by With or in a
// Record, should be qualified by the sequence of group names.
//
// How this qualification happens is up to the Handler, so long as
// this Handler's attribute keys differ from those of another Handler
// with a different sequence of group names.
//
// A Handler should treat WithGroup as starting a Group of Attrs that ends
// at the end of the log event. That is,
//
// logger.WithGroup("s").LogAttrs(level, msg, slog.Int("a", 1), slog.Int("b", 2))
//
// should behave like
//
// logger.LogAttrs(level, msg, slog.Group("s", slog.Int("a", 1), slog.Int("b", 2)))
//
// If the name is empty, WithGroup returns the receiver.
WithGroup(name string) Handler
}
A Handler handles log records produced by a Logger.
A typical handler may print log records to standard error, or write them to
a file or database, or perhaps augment them with additional attributes and
pass them on to another handler.
Any of the Handler's methods may be called concurrently with itself or
with other methods. It is the responsibility of the Handler to manage this
concurrency.
Users of the slog package should not invoke Handler methods directly.
They should use the methods of Logger instead.
type HandlerOptions struct {
// When AddSource is true, the handler adds a ("source", "file:line")
// attribute to the output indicating the source code position of the log
// statement. AddSource is false by default to skip the cost of computing
// this information.
AddSource bool
// Level reports the minimum record level that will be logged.
// The handler discards records with lower levels.
// If Level is nil, the handler assumes LevelInfo.
// The handler calls Level.Level for each record processed;
// to adjust the minimum level dynamically, use a LevelVar.
Level Leveler
// ReplaceAttr is called to rewrite each non-group attribute before it is logged.
// The attribute's value has been resolved (see [Value.Resolve]).
// If ReplaceAttr returns an Attr with Key == "", the attribute is discarded.
//
// The built-in attributes with keys "time", "level", "source", and "msg"
// are passed to this function, except that time is omitted
// if zero, and source is omitted if AddSource is false.
//
// The first argument is a list of currently open groups that contain the
// Attr. It must not be retained or modified. ReplaceAttr is never called
// for Group attributes, only their contents. For example, the attribute
// list
//
// Int("a", 1), Group("g", Int("b", 2)), Int("c", 3)
//
// results in consecutive calls to ReplaceAttr with the following arguments:
//
// nil, Int("a", 1)
// []string{"g"}, Int("b", 2)
// nil, Int("c", 3)
//
// ReplaceAttr can be used to change the default keys of the built-in
// attributes, convert types (for example, to replace a `time.Time` with the
// integer seconds since the Unix epoch), sanitize personal information, or
// remove attributes from the output.
ReplaceAttr func(groups []string, a Attr) Attr
}
HandlerOptions are options for a TextHandler or JSONHandler. A zero
HandlerOptions consists entirely of default values.
func (opts HandlerOptions) NewJSONHandler(w io.Writer) *JSONHandler
NewJSONHandler creates a JSONHandler with the given options that writes to
w.
func (opts HandlerOptions) NewTextHandler(w io.Writer) *TextHandler
NewTextHandler creates a TextHandler with the given options that writes to
w.
type JSONHandler struct {
// Has unexported fields.
}
JSONHandler is a Handler that writes Records to an io.Writer as
line-delimited JSON objects.
func NewJSONHandler(w io.Writer) *JSONHandler
NewJSONHandler creates a JSONHandler that writes to w, using the default
options.
func (h *JSONHandler) Enabled(_ context.Context, level Level) bool
Enabled reports whether the handler handles records at the given level.
The handler ignores records whose level is lower.
func (h *JSONHandler) Handle(_ context.Context, r Record) error
Handle formats its argument Record as a JSON object on a single line.
If the Record's time is zero, the time is omitted. Otherwise, the key is
"time" and the value is output as with json.Marshal.
If the Record's level is zero, the level is omitted. Otherwise, the key is
"level" and the value of Level.String is output.
If the AddSource option is set and source information is available, the key
is "source" and the value is output as "FILE:LINE".
The message's key is "msg".
To modify these or other attributes, or remove them from the output,
use [HandlerOptions.ReplaceAttr].
Values are formatted as with encoding/json.Marshal, with the following
exceptions:
- Floating-point NaNs and infinities are formatted as one of the strings
"NaN", "+Inf" or "-Inf".
- Levels are formatted as with Level.String.
- HTML characters are not escaped.
Each call to Handle results in a single serialized call to io.Writer.Write.
func (h *JSONHandler) WithAttrs(attrs []Attr) Handler
WithAttrs returns a new JSONHandler whose attributes consist of h's
attributes followed by attrs.
func (h *JSONHandler) WithGroup(name string) Handler
type Kind int
Kind is the kind of a Value.
const (
KindAny Kind = iota
KindBool
KindDuration
KindFloat64
KindInt64
KindString
KindTime
KindUint64
KindGroup
KindLogValuer
)
func (k Kind) String() string
type Level int
A Level is the importance or severity of a log event. The higher the level,
the more important or severe the event.
const (
LevelDebug Level = -4
LevelInfo Level = 0
LevelWarn Level = 4
LevelError Level = 8
)
The level numbers above don't mean anything in themselves, but the relative
values do. We picked them to satisfy three constraints.
First, we wanted the default level to be Info, and since Levels are ints,
Info is the default value for int, zero.
Second, we wanted to make it easy to use levels to specify logger verbosity.
Since a larger level means a more severe event, a logger that accepts events
with smaller (or more negative) level means a more verbose logger. Logger
verbosity is thus the negation of event severity, and the default verbosity
of 0 accepts all events at least as severe as INFO.
Third, we wanted some room between levels to accommodate schemes with
named levels between ours. For example, Google Cloud Logging defines a
Notice level between Info and Warn. Since there are only a few of these
intermediate levels, the gap between the numbers need not be large.
Our gap of 4 matches OpenTelemetry's mapping. Subtracting 9 from an
OpenTelemetry level in the DEBUG, INFO, WARN and ERROR ranges converts it to
the corresponding slog Level range. OpenTelemetry also has the names TRACE
and FATAL, which slog does not. But those OpenTelemetry levels can still be
represented as slog Levels by using the appropriate integers.
Names for common levels.
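The gaps make it possible to define intermediate levels as plain integers. The two level names below are hypothetical, chosen only to illustrate how Level.String renders values that fall between the named constants:

```go
package main

import (
	"fmt"
	"log/slog"
)

// Hypothetical intermediate levels fitting in the gaps between the
// built-in values, as the numbering scheme above intends.
const (
	LevelTrace  = slog.Level(-8) // below Debug
	LevelNotice = slog.Level(2)  // between Info and Warn
)

func main() {
	fmt.Println(LevelTrace.String())  // DEBUG-4
	fmt.Println(LevelNotice.String()) // INFO+2
}
```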
func (l Level) Level() Level
Level returns the receiver. It implements Leveler.
func (l Level) MarshalJSON() ([]byte, error)
MarshalJSON implements encoding/json.Marshaler by quoting the output of
Level.String.
func (l Level) MarshalText() ([]byte, error)
MarshalText implements encoding.TextMarshaler by calling Level.String.
func (l Level) String() string
String returns a name for the level. If the level has a name, then that
name in uppercase is returned. If the level is between named values, then an
integer is appended to the uppercased name. Examples:
LevelWarn.String() => "WARN"
(LevelInfo+2).String() => "INFO+2"
func (l *Level) UnmarshalJSON(data []byte) error
UnmarshalJSON implements encoding/json.Unmarshaler. It accepts any string
produced by Level.MarshalJSON, ignoring case. It also accepts numeric
offsets that would result in a different string on output. For example,
"Error-8" would marshal as "INFO".
func (l *Level) UnmarshalText(data []byte) error
UnmarshalText implements encoding.TextUnmarshaler. It accepts any string
produced by Level.MarshalText, ignoring case. It also accepts numeric
offsets that would result in a different string on output. For example,
"Error-8" would marshal as "INFO".
type LevelVar struct {
// Has unexported fields.
}
A LevelVar is a Level variable, to allow a Handler level to change
dynamically. It implements Leveler as well as a Set method, and it is safe
for use by multiple goroutines. The zero LevelVar corresponds to LevelInfo.
func (v *LevelVar) Level() Level
Level returns v's level.
func (v *LevelVar) MarshalText() ([]byte, error)
MarshalText implements encoding.TextMarshaler by calling Level.MarshalText.
func (v *LevelVar) Set(l Level)
Set sets v's level to l.
func (v *LevelVar) String() string
func (v *LevelVar) UnmarshalText(data []byte) error
UnmarshalText implements encoding.TextUnmarshaler by calling
Level.UnmarshalText.
type Leveler interface {
Level() Level
}
A Leveler provides a Level value.
As Level itself implements Leveler, clients typically supply a Level value
wherever a Leveler is needed, such as in HandlerOptions. Clients who need to
vary the level dynamically can provide a more complex Leveler implementation
such as *LevelVar.
type LogValuer interface {
LogValue() Value
}
A LogValuer is any Go value that can convert itself into a Value for
logging.
This mechanism may be used to defer expensive operations until they are
needed, or to expand a single value into a sequence of components.
type Logger struct {
// Has unexported fields.
}
A Logger records structured information about each call to its Log, Debug,
Info, Warn, and Error methods. For each call, it creates a Record and passes
it to a Handler.
To create a new Logger, call New or a Logger method that begins "With".
func Default() *Logger
Default returns the default Logger.
func New(h Handler) *Logger
New creates a new Logger with the given non-nil Handler and a nil context.
func With(args ...any) *Logger
With calls Logger.With on the default logger.
func (l *Logger) Debug(msg string, args ...any)
Debug logs at LevelDebug.
func (l *Logger) DebugCtx(ctx context.Context, msg string, args ...any)
DebugCtx logs at LevelDebug with the given context.
func (l *Logger) Enabled(ctx context.Context, level Level) bool
Enabled reports whether l emits log records at the given context and level.
func (l *Logger) Error(msg string, args ...any)
Error logs at LevelError.
func (l *Logger) ErrorCtx(ctx context.Context, msg string, args ...any)
ErrorCtx logs at LevelError with the given context.
func (l *Logger) Handler() Handler
Handler returns l's Handler.
func (l *Logger) Info(msg string, args ...any)
Info logs at LevelInfo.
func (l *Logger) InfoCtx(ctx context.Context, msg string, args ...any)
InfoCtx logs at LevelInfo with the given context.
func (l *Logger) Log(ctx context.Context, level Level, msg string, args ...any)
Log emits a log record with the current time and the given level and
message. The Record's Attrs consist of the Logger's attributes followed by
the Attrs specified by args.
The attribute arguments are processed as follows:
- If an argument is an Attr, it is used as is.
- If an argument is a string and this is not the last argument, the
following argument is treated as the value and the two are combined into
an Attr.
- Otherwise, the argument is treated as a value with key "!BADKEY".
func (l *Logger) LogAttrs(ctx context.Context, level Level, msg string, attrs ...Attr)
LogAttrs is a more efficient version of Logger.Log that accepts only Attrs.
func (l *Logger) Warn(msg string, args ...any)
Warn logs at LevelWarn.
func (l *Logger) WarnCtx(ctx context.Context, msg string, args ...any)
WarnCtx logs at LevelWarn with the given context.
func (l *Logger) With(args ...any) *Logger
With returns a new Logger that includes the given arguments, converted
to Attrs as in Logger.Log and resolved. The Attrs will be added to each
output from the Logger. The new Logger shares the old Logger's context. The
new Logger's handler is the result of calling WithAttrs on the receiver's
handler.
func (l *Logger) WithGroup(name string) *Logger
WithGroup returns a new Logger that starts a group. The keys of all
attributes added to the Logger will be qualified by the given name. The new
Logger shares the old Logger's context.
The new Logger's handler is the result of calling WithGroup on the
receiver's handler.
type Record struct {
// The time at which the output method (Log, Info, etc.) was called.
Time time.Time
// The log message.
Message string
// The level of the event.
Level Level
// The program counter at the time the record was constructed, as determined
// by runtime.Callers. If zero, no program counter is available.
//
// The only valid use for this value is as an argument to
// [runtime.CallersFrames]. In particular, it must not be passed to
// [runtime.FuncForPC].
PC uintptr
// Has unexported fields.
}
A Record holds information about a log event. Copies of a Record share
state. Do not modify a Record after handing out a copy to it. Use
Record.Clone to create a copy with no shared state.
func NewRecord(t time.Time, level Level, msg string, pc uintptr) Record
NewRecord creates a Record from the given arguments. Use Record.AddAttrs to
add attributes to the Record.
NewRecord is intended for logging APIs that want to support a Handler as a
backend.
func (r *Record) Add(args ...any)
Add converts the args to Attrs as described in Logger.Log, then appends the
Attrs to the Record's list of Attrs. It resolves the Attrs before doing so.
func (r *Record) AddAttrs(attrs ...Attr)
AddAttrs appends the given Attrs to the Record's list of Attrs. It resolves
the Attrs before doing so.
func (r Record) Attrs(f func(Attr))
Attrs calls f on each Attr in the Record. The Attrs are already resolved.
func (r Record) Clone() Record
Clone returns a copy of the record with no shared state. The original record
and the clone can both be modified without interfering with each other.
func (r Record) NumAttrs() int
NumAttrs returns the number of attributes in the Record.
type TextHandler struct {
// Has unexported fields.
}
TextHandler is a Handler that writes Records to an io.Writer as a sequence
of key=value pairs separated by spaces and followed by a newline.
func NewTextHandler(w io.Writer) *TextHandler
NewTextHandler creates a TextHandler that writes to w, using the default
options.
func (h *TextHandler) Enabled(_ context.Context, level Level) bool
Enabled reports whether the handler handles records at the given level.
The handler ignores records whose level is lower.
func (h *TextHandler) Handle(_ context.Context, r Record) error
Handle formats its argument Record as a single line of space-separated
key=value items.
If the Record's time is zero, the time is omitted. Otherwise, the key is
"time" and the value is output in RFC3339 format with millisecond precision.
If the Record's level is zero, the level is omitted. Otherwise, the key is
"level" and the value of Level.String is output.
If the AddSource option is set and source information is available, the key
is "source" and the value is output as FILE:LINE.
The message's key is "msg".
To modify these or other attributes, or remove them from the output,
use [HandlerOptions.ReplaceAttr].
If a value implements encoding.TextMarshaler, the result of MarshalText is
written. Otherwise, the result of fmt.Sprint is written.
Keys and values are quoted with strconv.Quote if they contain Unicode space
characters, non-printing characters, '"' or '='.
Keys inside groups consist of components (keys or group names) separated by
dots. No further escaping is performed. If it is necessary to reconstruct
the group structure of a key even in the presence of dots inside components,
use [HandlerOptions.ReplaceAttr] to escape the keys.
Each call to Handle results in a single serialized call to io.Writer.Write.
func (h *TextHandler) WithAttrs(attrs []Attr) Handler
WithAttrs returns a new TextHandler whose attributes consist of h's
attributes followed by attrs.
func (h *TextHandler) WithGroup(name string) Handler
type Value struct {
// Has unexported fields.
}
A Value can represent any Go value, but unlike type any, it can represent
most small values without an allocation. The zero Value corresponds to nil.
func AnyValue(v any) Value
AnyValue returns a Value for the supplied value.
If the supplied value is of type Value, it is returned unmodified.
Given a value of one of Go's predeclared string, bool, or (non-complex)
numeric types, AnyValue returns a Value of kind String, Bool, Uint64, Int64,
or Float64. The width of the original numeric type is not preserved.
Given a time.Time or time.Duration value, AnyValue returns a Value of kind
KindTime or KindDuration. The monotonic time is not preserved.
For nil, or values of all other types, including named types whose
underlying type is numeric, AnyValue returns a value of kind KindAny.
func BoolValue(v bool) Value
BoolValue returns a Value for a bool.
func DurationValue(v time.Duration) Value
DurationValue returns a Value for a time.Duration.
func Float64Value(v float64) Value
Float64Value returns a Value for a floating-point number.
func GroupValue(as ...Attr) Value
GroupValue returns a new Value for a list of Attrs. The caller must not
subsequently mutate the argument slice.
func Int64Value(v int64) Value
Int64Value returns a Value for an int64.
func IntValue(v int) Value
IntValue returns a Value for an int.
func StringValue(value string) Value
StringValue returns a new Value for a string.
func TimeValue(v time.Time) Value
TimeValue returns a Value for a time.Time. It discards the monotonic
portion.
func Uint64Value(v uint64) Value
Uint64Value returns a Value for a uint64.
func (v Value) Any() any
Any returns v's value as an any.
func (v Value) Bool() bool
Bool returns v's value as a bool. It panics if v is not a bool.
func (v Value) Duration() time.Duration
Duration returns v's value as a time.Duration. It panics if v is not a
time.Duration.
func (v Value) Equal(w Value) bool
Equal reports whether v and w represent the same Go value.
func (v Value) Float64() float64
Float64 returns v's value as a float64. It panics if v is not a float64.
func (v Value) Group() []Attr
Group returns v's value as a []Attr. It panics if v's Kind is not KindGroup.
func (v Value) Int64() int64
Int64 returns v's value as an int64. It panics if v is not a signed integer.
func (v Value) Kind() Kind
Kind returns v's Kind.
func (v Value) LogValuer() LogValuer
LogValuer returns v's value as a LogValuer. It panics if v is not a
LogValuer.
func (v Value) Resolve() Value
Resolve repeatedly calls LogValue on v while it implements LogValuer, and
returns the result. If v resolves to a group, the group's attributes' values
are also resolved. If the number of LogValue calls exceeds a threshold, a
Value containing an error is returned. Resolve's return value is guaranteed
not to be of Kind KindLogValuer.
func (v Value) String() string
String returns Value's value as a string, formatted like fmt.Sprint.
Unlike the methods Int64, Float64, and so on, which panic if v is of the
wrong kind, String never panics.
func (v Value) Time() time.Time
Time returns v's value as a time.Time. It panics if v is not a time.Time.
func (v Value) Uint64() uint64
Uint64 returns v's value as a uint64. It panics if v is not an unsigned
integer.
package slogtest
FUNCTIONS
func TestHandler(h slog.Handler, results func() []map[string]any) error
TestHandler tests a slog.Handler. If TestHandler finds any misbehaviors,
it returns an error for each, combined into a single error with errors.Join.
TestHandler installs the given Handler in a slog.Logger and makes several
calls to the Logger's output methods.
The results function is invoked after all such calls. It should return
a slice of map[string]any, one for each call to a Logger output method.
The keys and values of the map should correspond to the keys and values of
the Handler's output. Each group in the output should be represented as its
own nested map[string]any.
If the Handler outputs JSON, then calling encoding/json.Unmarshal with a
`map[string]any` will create the right data structure.
# Proposal: emit DWARF inlining info in the Go compiler
Author(s): Than McIntosh
Last updated: 2017-10-23
Discussion at: https://golang.org/issue/22080
# Abstract
In Go 1.9, the inliner was enhanced to support mid-stack inlining, including
tracking of inlines in the PC-value table to enable accurate tracebacks (see
[proposal](https://golang.org/issue/19348)).
The mid-stack inlining proposal included plans to enhance DWARF generation to
emit inlining records, however the DWARF support has yet to be implemented.
This document outlines a proposal for completing this work.
# Background
This section discusses previous work done on the compiler related to inlining
and to debug info generation, and outlines what we want to see in terms of
generated DWARF.
### Source position tracking
As part of the mid-stack inlining work, the Go compiler's source position
tracking was enhanced, giving it the ability to capture the inlined call stack
for an instruction created during an inlining operation.
This additional source position information is then used to create an
inline-aware PC-value table (readable by the runtime) to provide accurate
tracebacks, but is not yet being used to emit DWARF inlining records.
### Lexical scopes
The Go compiler also incorporates support for emitting DWARF lexical scope
records, so as to provide information to the debugger on which instance of a
given variable name is in scope at a given program point.
This feature is currently only operational when the user is compiling with "-l
-N" passed via -gcflags; these options disable inlining and turn off most
optimizations.
The scoping implementation currently relies on disabling the inliner; to enable
scope generation in combination with inlining would require a separate effort.
### Enhanced variable location tracking
There is also work being done to enable more accurate DWARF location lists for
function parameters and local variables.
This better value tracking is currently checked in but not enabled by default,
however the hope is to make this the default behavior for all compilations.
### Compressed source positions, updates during inlining
The compiler currently uses a compressed representation for source position
information.
AST nodes and SSA names incorporate a compact
[`src.XPos`](https://github.com/golang/go/blob/release-branch.go1.9/src/cmd/internal/src/xpos.go#L11)
object of the form
```
type XPos struct {
index int32 // index into table of PosBase objects
lico
}
```
where
[`src.PosBase`](https://github.com/golang/go/blob/release-branch.go1.9/src/cmd/internal/src/pos.go#L130)
contains source file info and a line base:
```
type PosBase struct {
pos Pos
filename string // file name used to open source file, for error messages
absFilename string // absolute file name, for PC-Line tables
symFilename string // cached symbol file name
line uint // relative line number at pos
inl int // inlining index (see cmd/internal/obj/inl.go)
}
```
In the struct above, `inl` is an index into the global inlining
tree (maintained as a global slice of
[`obj.InlinedCall`](https://github.com/golang/go/blob/release-branch.go1.9/src/cmd/internal/obj/inl.go#L46)
objects):
```
// InlinedCall is a node in an InlTree.
type InlinedCall struct {
Parent int // index of parent in InlTree or -1 if outermost call
Pos src.XPos // position of the inlined call
Func *LSym // function that was inlined
}
```
When the inliner replaces a call with the body of an inlinable procedure, it
creates a new `inl.InlinedCall` object based on the call, then a new
`src.PosBase` referring to the InlinedCall's index in the global tree.
It then rewrites/updates the src.XPos objects in the inlined blob to refer to
the new `src.PosBase` (this process is described in more detail in the
[mid-stack inlining design
document](https://golang.org/design/19348-midstack-inlining)).
### Overall existing framework for debug generation
DWARF generation is split between the Go compiler and Go linker; the top-level
driver routine for debug generation is
[`obj.populateDWARF`](https://github.com/golang/go/blob/release-branch.go1.9/src/cmd/internal/obj/objfile.go#L485).
This routine makes a call back into
[`gc.debuginfo`](https://github.com/golang/go/blob/release-branch.go1.9/src/cmd/compile/internal/gc/pgen.go#L304)
(via context pointer), which collects information on variables and scopes for a
function, then invokes
[`dwarf.PutFunc`](https://github.com/golang/go/blob/release-branch.go1.9/src/cmd/internal/dwarf/dwarf.go#L687)
to create what amounts to an abstract version of the DWARF DIE chain for the
function itself and its children (formals, variables, scopes).
The linker starts with the skeleton DIE tree emitted by the compiler, then uses
it as a guide to emit the actual DWARF .debug_info section.
Other DWARF sections (`.debug_line`, `.debug_frame`) are emitted as well
based on non-DWARF-specific data structures (for example, the PCLN table).
### Mechanisms provided by the DWARF standard for representing inlining info
The DWARF specification provides details on how compilers can capture and
encapsulate information about inlining.
See section 3.3.8 of the DWARF V4 standard for a start.
If a routine X winds up being inlined, the information that would ordinarily get
placed into the subprogram DIE is divided into two partitions: the abstract
attributes such as name, type (which will be the same regardless of whether
we're talking about an inlined function body or an out-of-line function body),
and concrete attributes such as the location for a variable, hi/lo PC or PC
ranges for a function body.
The abstract items are placed into an "abstract" subprogram instance, then each
actual instance of a function body is given a "concrete" instance, which refers
back to its parent abstract instance.
This can be seen in more detail in the "how the generated DWARF should look"
section below.
# Example
```
package s
func Leaf(lx, ly int) int {
return (lx << 7) ^ (ly >> uint32(lx&7))
}
func Mid(mx, my int) int {
var mv [10]int
mv[mx&3] += 2
return mv[my&3] + Leaf(mx+my, my-mx)
}
func Top(tq int) int {
var tv [10]int
tr := Leaf(tq-13, tq+13)
tv[tq&3] = Mid(tq, tq*tq)
return tr + tq + tv[tr&3]
}
```
If the code above is compiled with the existing compiler and the resulting DWARF
inspected, there is a single DW_TAG_subprogram DIE for `Top`, with variable DIEs
reflecting params and (selected) locals for that routine.
Two of the stack-allocated locals from the inlined routines (Mid and Leaf)
survive in the DWARF, but other inlined variables do not:
```
DW_TAG_subprogram {
DW_AT_name: s.Top
...
DW_TAG_variable {
DW_AT_name: tv
...
}
DW_TAG_variable {
DW_AT_name: mv
...
}
DW_TAG_formal_parameter {
DW_AT_name: tq
...
}
DW_TAG_formal_parameter {
DW_AT_name: ~r1
...
}
```
There are also subprogram DIE's for the out-of-line copies of `Leaf` and `Mid`,
which look similar (variable DIEs for locals and params with stack locations).
When enhanced DWARF location tracking is turned on, in addition to more accurate
variable location expressions within `Top`, there are additional DW_TAG_variable
entries for variables such as "lx" and "ly", corresponding to those values within
the inlined body of `Leaf`.
Since these vars are directly parented by `Top` there is no way to disambiguate
the various instances of a var such as "lx".
# How the generated DWARF should look
As mentioned above, emitting DWARF records that capture inlining decisions
involves splitting the subprogram DIE for a given function into two pieces, a
single "abstract instance" (containing location-independent info) and then a set
of "concrete instances", one for each instantiation of the function.
Here is a representation of how the generated DWARF should look for the example
above.
First, the abstract subprogram instance for `Leaf`.
It has no hi/lo PC and no locations for variables, etc. (these are provided in
the concrete instances):
```
DW_TAG_subprogram { // offset: D1
DW_AT_name: s.Leaf
DW_AT_inline : DW_INL_inlined (not declared as inline but inlined)
...
DW_TAG_formal_parameter { // offset: D2
DW_AT_name: lx
DW_AT_type: ...
}
DW_TAG_formal_parameter { // offset: D3
DW_AT_name: ly
DW_AT_type: ...
}
...
}
```
Next we would expect to see a concrete subprogram instance for `s.Leaf`, corresponding to the out-of-line copy of the function (which may wind up being eliminated by the linker if all calls are inlined).
This DIE refers back to its abstract parent via the DW_AT_abstract_origin
attribute, then fills in location details (such as hi/lo PC, variable locations,
etc):
```
DW_TAG_subprogram {
DW_AT_abstract_origin: // reference to D1 above
DW_AT_low_pc : ...
DW_AT_high_pc : ...
...
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D2 above
DW_AT_location: ...
}
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D3 above
DW_AT_location: ...
}
...
}
```
Similarly for `Mid`, there would be an abstract subprogram instance:
```
DW_TAG_subprogram { // offset: D4
DW_AT_name: s.Mid
DW_AT_inline : DW_INL_inlined (not declared as inline but inlined)
...
DW_TAG_formal_parameter { // offset: D5
DW_AT_name: mx
DW_AT_type: ...
}
DW_TAG_formal_parameter { // offset: D6
DW_AT_name: my
DW_AT_type: ...
}
DW_TAG_variable { // offset: D7
DW_AT_name: mv
DW_AT_type: ...
}
}
```
Then comes a concrete subprogram instance for the out-of-line copy of `Mid`.
Note that incorporated into the concrete instance for `Mid` we also see an
inlined instance for `Leaf`.
This DIE (with tag DW_TAG_inlined_subroutine) contains a reference to the
abstract subprogram DIE for `Leaf`, as well as attributes giving the file and
line of the callsite that was inlined:
```
DW_TAG_subprogram {
DW_AT_abstract_origin: // reference to D4 above
DW_AT_low_pc : ...
DW_AT_high_pc : ...
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D5 above
DW_AT_location: ...
}
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D6 above
DW_AT_location: ...
}
DW_TAG_variable {
DW_AT_abstract_origin: // reference to D7 above
DW_AT_location: ...
}
// inlined body of 'Leaf'
DW_TAG_inlined_subroutine {
DW_AT_abstract_origin: // reference to D1 above
DW_AT_call_file: 1
DW_AT_call_line: 10
DW_AT_ranges : ...
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D2 above
DW_AT_location: ...
}
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D3 above
DW_AT_location: ...
}
...
}
}
```
Finally we would expect to see a subprogram instance for `s.Top`.
Note that since `s.Top` is not inlined, we would have a single subprogram DIE
(as opposed to an abstract instance DIE and a concrete instance DIE):
```
DW_TAG_subprogram {
DW_AT_name: s.Top
DW_TAG_formal_parameter {
DW_AT_name: tq
DW_AT_type: ...
}
...
// inlined body of 'Leaf'
DW_TAG_inlined_subroutine {
DW_AT_abstract_origin: // reference to D1 above
DW_AT_call_file: 1
DW_AT_call_line: 15
DW_AT_ranges : ...
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D2 above
DW_AT_location: ...
}
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D3 above
DW_AT_location: ...
}
...
}
DW_TAG_variable {
DW_AT_name: tr
DW_AT_type: ...
}
DW_TAG_variable {
DW_AT_name: tv
DW_AT_type: ...
}
// inlined body of 'Mid'
DW_TAG_inlined_subroutine {
DW_AT_abstract_origin: // reference to D4 above
DW_AT_call_file: 1
DW_AT_call_line: 16
DW_AT_low_pc : ...
DW_AT_high_pc : ...
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D5 above
DW_AT_location: ...
}
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D6 above
DW_AT_location: ...
}
DW_TAG_variable {
DW_AT_abstract_origin: // reference to D7 above
DW_AT_location: ...
}
// inlined body of 'Leaf'
DW_TAG_inlined_subroutine {
DW_AT_abstract_origin: // reference to D1 above
DW_AT_call_file: 1
DW_AT_call_line: 10
DW_AT_ranges : ...
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D2 above
DW_AT_location: ...
}
DW_TAG_formal_parameter {
DW_AT_abstract_origin: // reference to D3 above
DW_AT_location: ...
}
...
}
}
}
```
# Outline of proposed changes
### Changes to the inliner
The inliner manufactures new temporaries for each of the inlined function's
formal parameters; it then creates code to assign the correct "actual"
expression to each temp, and finally walks the inlined body to replace formal
references with temp references.
For proper DWARF generation, we need to have a way to associate each of these
temps with the formal from which it was derived.
It should be possible to create such an association by making sure the temp has
the correct src pos (which refers to the callsite) and by giving the temp the
same name as the formal.
### Changes to debug generation
For the abbreviation table
([`dwarf.dwAbbrev`](https://github.com/golang/go/blob/release-branch.go1.9/src/cmd/internal/dwarf/dwarf.go#L245)
array), we will need to add abstract and concrete versions of the
DW_TAG_subprogram abbrev entry used for functions to the abbrev list.
For a given function,
[`dwarf.PutFunc`](https://github.com/golang/go/blob/release-branch.go1.9/src/cmd/internal/dwarf/dwarf.go#L687)
will need to emit either an ordinary subprogram DIE (if the function was never
inlined) or an abstract subprogram instance followed by a concrete subprogram
instance, corresponding to the out-of-line version of the function.
It probably makes sense to define a new `dwarf.InlinedCall` type; this will be a
struct holding information on the result of an inlined call in a function:
```
type InlinedCall struct {
Children []*InlinedCall
InlIndex int // index into ctx.InlTree
}
```
Code can be added (presumably in `gc.debuginfo`) that collects a tree of
`dwarf.InlinedCall` objects corresponding to the functions inlined into the
current function being emitted.
This tree can then be used to drive creation of concrete inline instances as
children of the subprogram DIE of the function being emitted.
There will need to be code written that assigns variables and instructions
(progs/PCs) to specific concrete inlined routine instances, similar to what is
being done currently with scopes in
[`gc.assembleScopes`](https://github.com/golang/go/blob/release-branch.go1.9/src/cmd/compile/internal/gc/scope.go#L29).
One wrinkle is that the existing machinery for creating intra-DWARF references
(attributes with form DW_FORM_ref_addr) assumes that the target of the reference
is a top-level DIE with an associated symbol (type, function, etc).
This assumption no longer holds for DW_AT_abstract_origin references to formal
parameters (where the param is a sub-attribute of a top-level DIE).
Some new mechanism will need to be invented to capture this flavor of reference.
### Changes to the linker
There will probably need to be a few changes to the linker to accommodate
abstract origin references, but for the most part I think the bulk of the work
will be done in the compiler.
# Compatibility
The DWARF constructs proposed here require DWARF version 4; however, the
compiler already emits DWARF version 4 as of Go 1.9.
# Implementation
The plan is for thanm@ to implement this in the Go 1.10 timeframe.
# Prerequisite Changes
N/A
# Preliminary Results
No data available yet.
The expectation is that this will increase load module size due to the
additional DWARF records, but it is not clear to what degree.
# Open issues
Once lexical scope tracking is enhanced to work for regular (not '-l -N')
compilation, we'll want to integrate inlined instance records with scopes (e.g.
if the topmost callsite in question is nested within a scope, then the top-level
inlined instance DIE should be parented by the appropriate scope DIE).
# Proposal: go/doc: headings, lists, and links in Go doc comments
Russ Cox \
January 2022
Earlier discussion at https://go.dev/issue/48305 and https://go.dev/issue/45533. \
Proposal at https://go.dev/issue/51082.
## Abstract
This proposal improves support for headings, lists, and links in Go doc comments,
while remaining backwards compatible with existing comments.
It includes a new package, `go/doc/comment`, exposing a parsed syntax tree for
doc comments, and it includes changes to `go/printer` and therefore `gofmt`
to format doc comments in a standard way.
<style>
th, td { vertical-align: top; }
</style>
For example, existing lists reformat from the display on the left to the one on the right:
<table>
<tr><td><img src="51082/list-before.png" width="308" height="271">
<td><img src="51082/list-after.png" width="307" height="289">
</table>
URL links can be rewritten to change the display on the left to the one on the right:
<table>
<tr><td><img src="51082/link-before.png" width="307" height="94">
<td><img src="51082/link-after.png" width="308" height="103">
</table>
And package doc comments (and others) can be rewritten to link to specific symbols:
<table>
<tr><td><img src="51082/doclink.png" width="366" height="383">
</table>
(Gerrit's Markdown viewer does not render the images.
See [the GitHub rendering](https://github.com/golang/proposal/blob/master/design/51082-godocfmt.md) instead.)
## Background
Go's doc comments today support plain text and preformatted (HTML \<pre>) blocks,
along with a subtle rule for turning certain lines into headings.
The specific rules are partially documented in the doc comment for [go/doc.ToHTML](https://pkg.go.dev/go/doc@go1.17#ToHTML):
> Each span of unindented non-blank lines is converted into a single paragraph. There is one exception to the rule: a span that consists of a single line, is followed by another paragraph span, begins with a capital letter, and contains no punctuation other than parentheses and commas is formatted as a heading.
>
> A span of indented lines is converted into a \<pre> block, with the common indent prefix removed.
>
> URLs in the comment text are converted into links; if the URL also appears in the words map, the link is taken from the map (if the corresponding map value is the empty string, the URL is not converted into a link).
>
> A pair of (consecutive) backticks (`) is converted to a unicode left quote (“), and a pair of (consecutive) single quotes (') is converted to a unicode right quote (”).
>
> Go identifiers that appear in the words map are italicized; if the corresponding map value is not the empty string, it is considered a URL and the word is converted into a link.
The current Go doc comment format has served us well since its
[introduction in 2009](https://go.googlesource.com/go/+/1605176e25fd).
There has only been one significant change, which was
[the addition of headings in 2011](https://go.googlesource.com/go/+/a6729b3085d7),
now clearly a bad design (see below).
But there are also a few long-open issues and proposals about doc comments, including:
- [#7349](https://go.dev/issue/7349) points out that the headings rule does not work well with non-Roman scripts.
- [#31739](https://go.dev/issue/31739) points out that lines ending with double quotes cannot be headings.
- [#34377](https://go.dev/issue/34377) points out that lines ending with parens cannot be headings.
- [#7873](https://go.dev/issue/7873) asks for list support.
- [#45533](https://go.dev/issue/45533) proposes linking of symbols, written as \[io.EOF], to make it easier to write good top-level doc comments and cross-reference with other packages.
It makes sense, as we approach a decade of experience, to take what we've learned and make one coherent revision, setting the syntax for the next 10 or so years.
### Goals and non-goals
The primary design criteria for Go doc comments was to make them
readable as ordinary comments when viewing the source code directly,
in contrast to systems like
[C#'s Xmldoc](https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/xmldoc/examples),
[Java's Javadoc](https://www.oracle.com/technical-resources/articles/java/javadoc-tool.html), and
[Perl's Perlpod](https://perldoc.perl.org/perlpod).
The goal was to prioritize readability, avoiding syntactic ceremony and complexity.
This remains as a primary goal.
Another concern, new since 2009, is backwards compatibility.
Whatever changes we make, existing doc comments must generally continue to render well.
Less important, but still something to keep in mind, is forward compatibility:
keeping new doc comments rendering well in older Go versions, for a smoother transition.
Another goal for the revamp is that it include writing a separate, standalone web page
explaining how to write Go doc comments.
Today that information is
[squirreled away in the doc.ToHTML comment](https://pkg.go.dev/go/doc@go1.17#ToHTML) and is not easily found or widely known.
Within those constraints, the focus I have set for this revamp is to address the issues listed above.
Specifically:
1. Make the header syntax more predictable. The headings rule is clearly difficult to remember and has too many false negatives. But further adjustments of the current rule run the risk of false positives.
2. Add support for lists. There are many times in documentation when a bullet or numbered list is called for. Those appear in many doc comments today, as indented \<pre> blocks.
3. Add support for links to URLs. Today the only way to link to something is by writing the URL directly, but those can sometimes be quite unreadable and interrupt the text.
4. Add support for links to Go API documentation, in the current package and in other packages. This would have multiple benefits, but one is the ability in large packages to write top-level doc comments that give a good overview and link directly to the functions and types being described.
I believe it also makes sense to add another goal:
5. Add formatting of doc comments to gofmt, to promote consistent appearance and create more room for future changes.
It is not a goal to support every possible kind of documentation or markup. For example:
- Plain text has served us very well so far, and while some might prefer that _comments_ `allow` **font** ***`changes`***, the syntactic ceremony and complexity involved seems not worth the benefit, no matter how it is done.
- People have asked for support for embedding images in documentation (see [#39513](https://github.com/golang/go/issues/39513)), but that adds significant complexity as well: image size hints, different resolutions, image sets, images suitable for both light and dark mode presentation, and so on. It is also difficult ([but not impossible](https://twitter.com/thingskatedid/status/1316074032379248640)) to render them on the command line. Although images clearly have important uses, all this complexity is in direct conflict with the primary goal. For these reasons, images are out of scope. I also note that C#'s Xmldoc and Perl's Perlpod seem not to have image support, although Java's Javadoc does.
Based on a [preliminary GitHub discussion](https://go.dev/issue/48305),
which in turn built on an [earlier discussion of doc links](https://go.dev/issue/45533),
this proposal aims to address headings, lists, and links.
The following subsections elaborate the background for each feature.
### Headings
As noted above, [headings were added in 2011](https://go.googlesource.com/go/+/a6729b3085d7),
and the current documentation says:
> a span that consists of a single line, is followed by another paragraph span, begins with a capital letter, and contains no punctuation other than parentheses and commas is formatted as a heading.
This is not quite accurate.
The code has been updated over time without maintaining the comment.
Today it includes a special case to allow “apostrophe s”
so “Go's doc comments” is a heading,
but “Go's heading rule doesn't work” is not a heading.
On the other hand, Unicode single right quotes are not rejected,
so “Go’s heading rule doesn’t work” is a heading.
On the other other hand, certain Unicode punctuation is rejected,
so that “The § symbol” is not a heading,
even though “The ¶ symbol” is a heading.
The rule also includes a special case for periods,
to permit “The go.dev site” but not “Best. Heading. Ever”.
The rule started out simple but insufficient,
and now the accumulated patches have made it a mess.
### Lists
There is no support for lists today. Documentation needing lists uses indented preformatted text instead.
For example, here are the docs for `cookiejar.PublicSuffixList`:
// PublicSuffixList provides the public suffix of a domain. For example:
// - the public suffix of "example.com" is "com",
// - the public suffix of "foo1.foo2.foo3.co.uk" is "co.uk", and
// - the public suffix of "bar.pvt.k12.ma.us" is "pvt.k12.ma.us".
//
// Implementations of PublicSuffixList must be safe for concurrent use by
// multiple goroutines.
And here are the docs for url.URL.String:
// In the second form, the following rules apply:
// - if u.Scheme is empty, scheme: is omitted.
// - if u.User is nil, userinfo@ is omitted.
// - if u.Host is empty, host/ is omitted.
// - if u.Scheme and u.Host are empty and u.User is nil,
// the entire scheme://userinfo@host/ is omitted.
// - if u.Host is non-empty and u.Path begins with a /,
// the form host/path does not add its own /.
// - if u.RawQuery is empty, ?query is omitted.
// - if u.Fragment is empty, #fragment is omitted.
Ideally, we'd like to adopt a rule that makes these into bullet lists without any edits at all.
### Links
Documentation is more useful with clear links to other web pages.
For example, the encoding/json package doc today says:
// Package json implements encoding and decoding of JSON as defined in
// RFC 7159. The mapping between JSON and Go values is described
// in the documentation for the Marshal and Unmarshal functions.
//
// See "JSON and Go" for an introduction to this package:
// https://golang.org/doc/articles/json_and_go.html
There is no link to the actual RFC 7159, leaving the reader to Google it.
And the link to the “JSON and Go” article must be copied and pasted.
Documentation is also more useful with clear links to other documentation,
whether it's one function linking to its newer, preferred version,
or a top-level doc comment summarizing the overall API of the package,
with links to the key types and functions.
Today there is no way to do this.
Names can be mentioned, of course, but users must find the docs on their own.
## Proposal
This proposal has nine parts:
1. New syntax for headings.
2. New syntax for lists.
3. New syntax for URL links
4. New syntax for documentation links.
5. Reformatting of doc comments by go/printer and gofmt.
6. New documentation for doc comments.
7. Rendering the new syntax on go.dev and pkg.go.dev.
8. A new go/doc/comment package.
9. Changes in the go/doc package.
Note that the new syntax is inspired by and aims to be a subset of Markdown,
but it is not full Markdown. This is discussed in the Rationale section below.
### New syntax for headings
If a span of non-blank lines is a single line beginning with # followed by a space or tab and then additional text, then that line is a heading.
# This is a heading
Here are some examples of variations that do not satisfy the rule and are therefore not headings:
#This is not a heading, because there is no space.
# This is not a heading,
# because it is multiple lines.
# This is not a heading,
because it is also multiple lines.
The next span is not a heading, because there is no additional text:
#
In the middle of a span of non-blank lines,
# this is not a heading either.
# This is not a heading, because it is indented.
The old heading rule will remain valid, which is acceptable since it mainly
has false negatives, not false positives. This will keep existing doc comments
rendering as they do today.
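For example, a doc comment written with the new syntax might look like this (the package name and text are invented for illustration):

	// Package pathutil implements utility routines for manipulating
	// slash-separated paths.
	//
	// # Security Considerations
	//
	// The functions in this package do not resolve symbolic links.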
### New syntax for lists
In a span of lines all blank or indented by one or more spaces or tabs
(which would otherwise be a \<pre> block), if the first indented line
begins with a bullet list marker or a numbered list marker, then that
span of indented lines is a bullet list or numbered list. A bullet
list marker is a dash, star, or plus followed by a space or tab and
then text. In a bullet list, each line beginning with a bullet list
marker starts a new list item. A numbered list marker is a decimal
number followed by a period or right parenthesis, then a space or tab,
and then text. In a numbered list, each line beginning with a number
list marker starts a new list item. Item numbers are left as is, never
renumbered (unlike Markdown).
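For example, a comment like the following (invented for illustration) is recognized as a numbered list rather than a \<pre> block:

	// Parsing proceeds in three steps:
	//  1. Split the comment into blocks.
	//  2. Classify each block as a paragraph, heading, list, or code.
	//  3. Resolve link targets and doc links.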
Using this rule, most doc comments with \<pre> bullet lists today will
instead be rendered as proper bullet lists.
Note that the rule means that a list item followed by a blank line
followed by additional indented text continues the list item
(regardless of comparative indentation level):
// Text.
//
// - A bullet.
//
// Another paragraph of that first bullet.
//
// - A second bullet.
Note also that there are no code blocks inside list items—any indented
paragraph following a list item continues the list item, and the list
ends at the next unindented line—nor are there nested lists. This
avoids the space-counting subtleties of Markdown.
To re-emphasize, a critical property of this definition of lists is
that it makes existing doc comments written with pseudo-lists turn
into doc comments with real lists.
Markdown recognizes three different bullets: -, \*, and +. In the main
Go repo, the dash is dominant: in comments of the form `//[ \t]+[-+*]`
followed by a space (grepping, so some of these may not be in doc
comments), 84% use -, 14% use \*, and 2% use +. In a now slightly
dated corpus of external Go code, the star is dominant: 37.6% -, 61.8%
\*, 0.7% +.
Markdown also recognizes two different numeric list item suffixes:
“1.” and “1)”. In the main Go repo, 66% of comments use “1.” (versus
34% for “1)”). In the external corpus, “1.” is again the dominant
choice, 81% to 19%.
We have two conflicting goals: handle existing comments well, and
avoid needless variation. To satisfy both, all three bullets and both
forms of numbers will be recognized, but gofmt (see below) will
rewrite them to a single canonical form: dash for bullets, and “N.”
for numbers. (Why dashes and not asterisks? Proper typesetting of
bullet lists sometimes does use dashes, but never uses asterisks, so
using dashes keeps the comments looking as typographically clean as
possible.)
### New syntax for URL links
A span of unindented non-blank lines defines link targets when each
line is of the form “[Text]: URL”. In other text, “[Text]” represents
a link to URL using the given text—in HTML, \<a href="URL">Text\</a>.
For example:
// Package json implements encoding and decoding of JSON as defined in
// [RFC 7159]. The mapping between JSON and Go values is described
// in the documentation for the Marshal and Unmarshal functions.
//
// For an introduction to this package, see the article
// “[JSON and Go].”
//
// [RFC 7159]: https://tools.ietf.org/html/rfc7159
// [JSON and Go]: https://golang.org/doc/articles/json_and_go.html
Note that the link definitions can only be given in their own
“paragraph” (span of non-blank unindented lines), which can contain
more than one such definition, one per line. If there is no
corresponding URL declaration, then (except for doc links, described
in the next section) the text is not a hyperlink, and the square
brackets are preserved.
This format only minimally interrupts the flow of the actual text,
since the URLs are moved to a separate section. It also roughly
matches the Markdown [shortcut reference
link](https://spec.commonmark.org/0.30/#shortcut-reference-link)
format, without the optional title text.
### New syntax for documentation links
Doc links are links of the form “[Name1]” or “[Name1.Name2]” to refer
to exported identifiers in the current package, or “[pkg]”,
“[pkg.Name1]”, or “[pkg.Name1.Name2]” to refer to identifiers in other
packages. In the second form, “pkg” can be either a full import path
or the assumed package name of an existing import. The assumed package
name is either the identifier in a renamed import or else [the name
assumed by
goimports](https://cs.opensource.google/go/x/tools/+/refs/tags/v0.1.5:internal/imports/fix.go;l=1128;drc=076821bd2bbc30898f197ea7efb3a48cee295288).
(Goimports inserts renamings when that assumption is not correct, so
this rule should work for essentially all Go code.) A “pkg” is only
assumed to be a full import path if it starts with a domain name (a
path element with a dot) or is one of the packages from the standard
library (“[os]”, “[encoding/json]”, and so on). To avoid problems with
maps, generics, and array types, doc links must be both preceded and
followed by punctuation, spaces, tabs, or the start or end of a line.
For example, if the current package imports encoding/json, then “[json.Decoder]” can be written in place of “[encoding/json.Decoder]” to link to the docs for encoding/json's Decoder.
The implications and potential false positives of this implied URL link are [presented by Joe Tsai here](https://github.com/golang/go/issues/45533#issuecomment-819363364). In particular, the false positive rate appears to be low enough not to worry about.
To illustrate the need for the punctuation restriction, consider:
// Constant folding computes the exact constant value ([constant.Value])
// for every expression ([ast.Expr]) that is a compile-time constant.
versus
// The Types field, a map[ast.Expr]TypeAndValue,
// holds type-checking results for all AST expressions.
and
// A SHA1 hash is a [Size]byte.
### Reformatting doc comments
We propose that gofmt and go/printer reformat doc comments to
a conventional presentation, updating old syntax to new syntax
and standardizing details such as the indentation used for preformatted
blocks, the exact spacing in lists, and so on.
The reformatting would canonicalize a doc comment so that it
renders exactly as before but uses standard layout. Specifically:
- All paragraphs are separated by single blank lines.
- Legacy headings are converted to “#” headings.
- All preformatted blocks are indented by a single tab.
- All preformatted blocks have a blank line before and after.
- All list markers are written as “␠␠-␠” (space space dash space) or “␠N.␠” (space number dot space).
- All list item continuation text, including additional paragraphs, is indented by four spaces.
- Lists that themselves contain any blank lines are separated
from the preceding paragraph or heading by a blank line.
- All lists are followed by a blank line (except at the end of the doc comment).
- If there is a blank line anywhere in a list, there are blank lines between all list elements.
- All bullet list items use - as the bullet (\* and + are converted).
- All numbered list items use the “N.” form (“N)” is converted).
- The ASCII double-single-quote forms that have always been
defined to render as “ and ” are replaced with those.
- Link URL definitions are moved to the bottom of the doc comment,
in two different blank-line-separated groups:
definitions used by the doc comment
and definitions not used.
Separating the second group makes it easy first to recognize
that there are unused definitions and second to delete them.
- Tool directive comments, such as `//go:build linux`, are
moved to the end of the doc comment (after link URLs).
The exact details have been chosen to make as few changes as possible
in existing comments, while still converging on a standard formatting
that leaves room for potential future expansion.
Like the rest of gofmt's rules, the exact details matter less than
having a consistent format.
The formatter would not reflow paragraphs, so as not to prohibit use of the
[semantic linefeeds convention](https://rhodesmill.org/brandon/2012/one-sentence-per-line/).
This canonical formatting has the benefit for Markdown aficionados of
being compatible with the Markdown equivalents. The output would still
not be exactly Markdown, since various punctuation would not be (and
does not need to be) escaped, but the block structure Go doc comments
and Markdown have in common would be rendered as valid Markdown.
### New documentation
We propose to add a new page, go.dev/doc/comment, describing how to write doc comments,
and to link to it from the go/doc and go/doc/comment package docs
and other places.
### Rendering the new syntax
We then propose to update golang.org/x/website/cmd/golangorg,
which serves go.dev, as well as golang.org/x/pkgsite, which serves pkg.go.dev,
to use the new documentation renderer.
### A new go/doc/comment package
The current doc.ToHTML is given only the comment text
and therefore cannot implement import-based links to other identifiers.
Some new API is needed.
The discussion on #48305 identified that many tool builders need
access to a parsed comment's abstract syntax tree,
so we propose to add a new package, go/doc/comment,
that provides both parsing and printing of doc comments.
The existing APIs in go/doc will be preserved and reimplemented
to use this API, but new code would be expected to use the new
entry points.
The new API in go/doc/comment defines an AST for doc comments,
of which the new Doc type is the root.
It defines Parser and Printer structs to hold parser and printer configuration.
The Parser type has a Parse method, turning text into a Doc.
The Printer type has printing methods Comment, HTML, Markdown, and Text,
each taking a Doc and returning the Go doc comment, HTML, Markdown,
and plain text form of the comment.
The full API is listed in Appendix B below.
(Markdown output is new but implements an [already-accepted
proposal](https://golang.org/issue/34875).)
### Changes in the go/doc package
The existing Synopsis, ToHTML, and ToText top-level functions will continue to work,
but they will not be able to recognize documentation links
and will be marked as deprecated, in favor of new API in the form
of methods on type Package.
The Package type will also add two methods, Parser and Printer:
func (*Package) Parser() *comment.Parser
func (*Package) Printer() *comment.Printer
These return a freshly allocated struct on each call, so that the caller can
customize the parser or printer as needed and then use it to parse
or print a doc comment. The Parser will be configured to recognize
the imported packages and top-level declarations of the given package.
The printer will be a standard Printer for now, but perhaps that would
change in the future, and having p.Printer is more convenient for users.
For example, code that currently uses doc.ToHTML(w, text, words) could change to using:
	parser := p.Parser()
	parser.Words = words
	w.Write(p.Printer().HTML(parser.Parse(text)))
There will also be API for the common cases, directly on Package,
to ease migration from the old top-level functions.
The full API is listed in Appendix A below.
## Rationale
Much of the rationale for the current approach is given above during its description.
The main alternatives would be leaving things as they are (do nothing)
and adopting Markdown.
Doing nothing would leave the problems we have today
as far as headings, lists, and links unsolved.
We could muddle on that way, but this proposal aims to
fix those without introducing unnecessary complexity
or giving up the current readability of doc comments.
### Markdown is not the answer, but we can borrow good ideas
An obvious suggestion is to switch to Markdown; this is especially
obvious given the discussion being hosted on GitHub where all comments
are written in Markdown. I am fairly convinced Markdown is not the
answer, for a few reasons.
First, there is no single definition of Markdown, as [explained on the
CommonMark site](https://commonmark.org/#why). CommonMark is roughly
what is used on GitHub, Reddit, and Stack Overflow (although even
among those there [can be significant
variation](https://github.github.com/gfm/)). Even so, let's define
Markdown as CommonMark and continue.
Second, Markdown is not backwards compatible with existing doc
comments. Go doc comments require only a single space of indentation
to start a \<pre> block, while Markdown requires more. Also, it is
common for Go doc comments to use Go expressions like \`raw strings\`
or formulas like a\*x^2+b\*x+c. Markdown would instead interpret those
as syntactic markup and render as “`raw strings` or formulas like
a*x^2+b*x+c”. Existing comments would need to be revised to make them
Markdown-safe.
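The difference in indentation rules can be demonstrated with the proposed parser, where a single leading space is enough to start a preformatted block (a sketch using the go/doc/comment API described in this proposal):

```go
// In a Go doc comment, a single space of indentation starts a code block,
// unlike Markdown's four spaces. blockKinds reports the concrete block
// types the parser produces for a given comment.
package main

import (
	"fmt"
	"go/doc/comment"
)

func blockKinds(text string) []string {
	var p comment.Parser
	var kinds []string
	for _, b := range p.Parse(text).Content {
		kinds = append(kinds, fmt.Sprintf("%T", b))
	}
	return kinds
}

func main() {
	// The second line is indented by one space, so it parses as code.
	fmt.Println(blockKinds("Some text.\n\n code := here\n"))
}
```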
Third, despite frequent claims of readability by its proponents,
Markdown is simply not terribly readable in aggregate. The basics
of Markdown can be simple and punctuation-free, but once you get into
more advanced uses, there is a surfeit of notation which directly
works against the goal of being able to read (and write) program
comments in source files without special tooling. Markdown doc
comments would end up full of backquotes and underscores and stars,
along with backslashes to escape punctuation that would otherwise be
interpreted specially. (Here is my favorite recent example of a
[particularly subtle
issue](https://github.com/commonmark/commonmark-spec/issues/646).)
The claims of readability for Markdown are perhaps made in comparison to HTML,
which is certainly true, but Go aims for even more readability.
Fourth, Markdown is surprisingly complex. Markdown, befitting its Perl
roots, provides more than one way to do just about anything: \_i\_,
\*i\*, and \<em>i\</em>; Setext and ATX headings; indented code blocks
and fenced code blocks; three different ways to write a link; and so
on. There are subtle rules about exactly how many spaces of
indentation are required or allowed in different circumstances. All of
this harms not just readability but also comprehensibility,
learnability, and consistency. The ability to embed arbitrary HTML
adds even more complexity. Developers should be spending their time on
the code, not on arcane details of documentation formatting.
Of course, Markdown is widely used and therefore familiar to many
users. Even though it would be a serious mistake to adopt Markdown in
its entirety, it does make sense to look to Markdown for conventions
that users would already be familiar with, that we can tailor to Go's
needs. If you are a fan of Markdown, you can view this revision as
making Go adopt a (very limited) subset of Markdown. If not, you can
view it as Go adopting a couple extra conventions that can be defined
separately from any Markdown implementation or spec.
## Compatibility
There are no problems as far as the [compatibility guidelines](https://golang.org/doc/go1compat).
Only new API is being added.
Existing API is being updated to continue to work as much as possible.
The exception is that doc.ToHTML and doc.ToText cannot render doc links,
because they do not know what package the docs came from.
But all other rendering, including rendering of existing doc comments,
will continue to work.
It is also worth considering how the new doc comments
will render in older versions of Go, during the transition period
when some people still have old toolchains.
Because headings remain in a paragraph by themselves,
the worst that will happen is that an old toolchain will render
a paragraph beginning with a # instead of a heading.
Because lists remain indented, old toolchains will continue to
show them as preformatted blocks.
And old toolchains will show the raw syntax for links,
which is chosen to be fairly readable in that case.
## Implementation
The core of the proposal is implemented as pending CLs,
to help understand its effect on existing code.
(In fact, the formatting rules have been adjusted,
and API features like ForceBlankBefore exist,
precisely to limit the typical effect on existing code.)
- [CL 384263](https://go.dev/cl/384263) implements the new go/doc/comment package.
- [CL 384264](https://go.dev/cl/384264) makes the changes to go/printer.
- [CL 384265](https://go.dev/cl/384265) makes the changes to go/doc.
- [CL 384266](https://go.dev/cl/384266) updates cmd/doc to use the new API.
- [CL 384268](https://go.dev/cl/384268) shows the effect of reformatting comments in the Go repo.
- [CL 384274](https://go.dev/cl/384274) uses the new package in golang.org/x/website.
If accepted, any adjustments can be made fairly easily.
Russ Cox would take care of landing that work.
There would also be work required in x/pkgsite to use the new APIs and retire that repo's fork of go/doc.
## Appendix A: go/doc API changes
Added:
```
func (p *Package) HTML(text string) []byte
HTML returns formatted HTML for the doc comment text.
To customize details of the HTML, use [Package.Printer] to obtain a
[comment.Printer], and configure it before calling its HTML method.
func (p *Package) Markdown(text string) []byte
Markdown returns formatted Markdown for the doc comment text.
To customize details of the Markdown, use [Package.Printer] to obtain a
[comment.Printer], and configure it before calling its Markdown method.
func (p *Package) Parser() *comment.Parser
Parser returns a doc comment parser configured for parsing doc comments from
package p. Each call returns a new parser, so that the caller may customize
it before use.
func (p *Package) Printer() *comment.Printer
Printer returns a doc comment printer configured for printing doc comments
from package p. Each call returns a new printer, so that the caller may
customize it before use.
func (p *Package) Synopsis(text string) string
Synopsis returns a cleaned version of the first sentence in text.
That sentence ends after the first period followed by space and not
preceded by exactly one uppercase letter. The result string has no \n, \r,
or \t characters and uses only single spaces between words. If text starts
with any of the IllegalPrefixes, the result is the empty string.
func (p *Package) Text(text string) []byte
Text returns formatted text for the doc comment text, wrapped to 80 Unicode
code points and using tabs for code block indentation.
To customize details of the formatting, use [Package.Printer] to obtain a
[comment.Printer], and configure it before calling its Text method.
```
Deprecated:
```
func Synopsis(text string) string
Synopsis returns a cleaned version of the first sentence in text.
Deprecated: New programs should use [Package.Synopsis] instead, which
handles links in text properly.
func ToHTML(w io.Writer, text string, words map[string]string)
ToHTML converts comment text to formatted HTML.
Deprecated: ToHTML cannot identify documentation links in the doc comment,
because they depend on knowing what package the text came from, which is not
included in this API.
Given the *[doc.Package] p where text was found, ToHTML(w, text, nil) can be
replaced by:
w.Write(p.HTML(text))
which is in turn shorthand for:
w.Write(p.Printer().HTML(p.Parser().Parse(text)))
If words may be non-nil, the longer replacement is:
parser := p.Parser()
parser.Words = words
w.Write(p.Printer().HTML(parser.Parse(text)))
func ToText(w io.Writer, text string, prefix, codePrefix string, width int)
ToText converts comment text to formatted text.
Deprecated: ToText cannot identify documentation links in the doc comment,
because they depend on knowing what package the text came from, which is not
included in this API.
Given the *[doc.Package] p where text was found, ToText(w, text, "", "\t",
80) can be replaced by:
w.Write(p.Text(text))
In the general case, ToText(w, text, prefix, codePrefix, width) can be
replaced by:
d := p.Parser().Parse(text)
pr := p.Printer()
pr.TextPrefix = prefix
pr.TextCodePrefix = codePrefix
pr.TextWidth = width
w.Write(pr.Text(d))
See the documentation for [Package.Text] and [comment.Printer.Text] for more
details.
```
## Appendix B: go/doc/comment API
```
package comment // import "go/doc/comment"
FUNCTIONS
func DefaultLookupPackage(name string) (importPath string, ok bool)
DefaultLookupPackage is the default package lookup function, used when
[Parser].LookupPackage is nil. It recognizes names of the packages from the
standard library with single-element import paths, such as math, which would
otherwise be impossible to name.
Note that the go/doc package provides a more sophisticated lookup based on
the imports used in the current package.
TYPES
type Block interface {
// Has unexported methods.
}
A Block is block-level content in a doc comment, one of *[Code], *[Heading],
*[List], or *[Paragraph].
type Code struct {
// Text is the preformatted text, ending with a newline character.
// It may be multiple lines, each of which ends with a newline character.
// It is never empty, nor does it start or end with a blank line.
Text string
}
A Code is a preformatted code block.
type Doc struct {
// Content is the sequence of content blocks in the comment.
Content []Block
// Links is the link definitions in the comment.
Links []*LinkDef
}
A Doc is a parsed Go doc comment.
type DocLink struct {
Text []Text // text of link
// ImportPath, Recv, and Name identify the Go package or symbol
// that is the link target. The potential combinations of
// non-empty fields are:
// - ImportPath: a link to another package
// - ImportPath, Name: a link to a const, func, type, or var in another package
// - ImportPath, Recv, Name: a link to a method in another package
// - Name: a link to a const, func, type, or var in this package
// - Recv, Name: a link to a method in this package
ImportPath string // import path
Recv string // receiver type, without any pointer star, for methods
Name string // const, func, type, var, or method name
}
A DocLink is a link to documentation for a Go package or symbol.
func (l *DocLink) DefaultURL(baseURL string) string
DefaultURL constructs and returns the documentation URL for l, using baseURL
as a prefix for links to other packages.
The possible forms returned by DefaultURL are:
- baseURL/ImportPath, for a link to another package
- baseURL/ImportPath#Name, for a link to a const, func, type, or var in another package
- baseURL/ImportPath#Recv.Name, for a link to a method in another package
- #Name, for a link to a const, func, type, or var in this package
- #Recv.Name, for a link to a method in this package
If baseURL ends in a trailing slash, then DefaultURL inserts a slash between
ImportPath and # in the anchored forms. For example, here are some baseURL
values and URLs they can generate:
"/pkg/" → "/pkg/math/#Sqrt"
"/pkg" → "/pkg/math#Sqrt"
"/" → "/math/#Sqrt"
"" → "/math#Sqrt"
type Heading struct {
Text []Text // the heading text
}
A Heading is a doc comment heading.
func (h *Heading) DefaultID() string
DefaultID returns the default anchor ID for the heading h.
The default anchor ID is constructed by dropping all characters except A-Z,
a-z, 0-9, space, and tab, converting to lower case, splitting into
space-or-tab-separated fields, and then rejoining the fields using hyphens.
For example, if the heading text is “Everybody Says Don't”, the default ID
is “everybody-says-dont”.
type Italic string
An Italic is a string rendered as italicized text.
type Link struct {
Auto bool // is this an automatic (implicit) link of a literal URL?
Text []Text // text of link
URL string // target URL of link
}
A Link is a link to a specific URL.
type LinkDef struct {
Text string // the link text
URL string // the link URL
Used bool // whether the comment uses the definition
}
A LinkDef is a single link definition.
type List struct {
// Items is the list items.
Items []*ListItem
// ForceBlankBefore indicates that the list must be
// preceded by a blank line when reformatting the comment,
// overriding the usual conditions. See the BlankBefore method.
//
// The comment parser sets ForceBlankBefore for any list
// that is preceded by a blank line, to make sure
// the blank line is preserved when printing.
ForceBlankBefore bool
// ForceBlankBetween indicates that list items must be
// separated by blank lines when reformatting the comment,
// overriding the usual conditions. See the BlankBetween method.
//
// The comment parser sets ForceBlankBetween for any list
// that has a blank line between any two of its items, to make sure
// the blank lines are preserved when printing.
ForceBlankBetween bool
}
A List is a numbered or bullet list. Lists are always non-empty: len(Items)
> 0. In a numbered list, every Items[i].Number is a non-empty string. In a
bullet list, every Items[i].Number is an empty string.
func (l *List) BlankBefore() bool
BlankBefore reports whether a reformatting of the comment should include a
blank line before the list. The default rule is the same as for
[BlankBetween]: if the list item content contains any blank lines (meaning
at least one item has multiple paragraphs) then the list itself must be
preceded by a blank line. A preceding blank line can be forced by setting
[List].ForceBlankBefore.
func (l *List) BlankBetween() bool
BlankBetween reports whether a reformatting of the comment should include a
blank line between each pair of list items. The default rule is that if the
list item content contains any blank lines (meaning at least one item has
multiple paragraphs) then list items must themselves be separated by blank
lines. Blank line separators can be forced by setting
[List].ForceBlankBetween.
type ListItem struct {
// Number is a decimal string in a numbered list
// or an empty string in a bullet list.
Number string // "1", "2", ...; "" for bullet list
// Content is the list content.
// Currently, restrictions in the parser and printer
// require every element of Content to be a *Paragraph.
Content []Block // Content of this item.
}
A ListItem is a single item in a numbered or bullet list.
type Paragraph struct {
Text []Text
}
A Paragraph is a paragraph of text.
type Parser struct {
// Words is a map of Go identifier words that
// should be italicized and potentially linked.
// If Words[w] is the empty string, then the word w
// is only italicized. Otherwise it is linked, using
// Words[w] as the link target.
// Words corresponds to the [go/doc.ToHTML] words parameter.
Words map[string]string
// LookupPackage resolves a package name to an import path.
//
// If LookupPackage(name) returns ok == true, then [name]
// (or [name.Sym] or [name.Sym.Method])
// is considered a documentation link to importPath's package docs.
// It is valid to return "", true, in which case name is considered
// to refer to the current package.
//
// If LookupPackage(name) returns ok == false,
// then [name] (or [name.Sym] or [name.Sym.Method])
// will not be considered a documentation link,
// except in the case where name is the full (but single-element) import path
// of a package in the standard library, such as in [math] or [io.Reader].
// LookupPackage is still called for such names,
// in order to permit references to imports of other packages
// with the same package names.
//
// Setting LookupPackage to nil is equivalent to setting it to
// a function that always returns "", false.
LookupPackage func(name string) (importPath string, ok bool)
// LookupSym reports whether a symbol name or method name
// exists in the current package.
//
// If LookupSym("", "Name") returns true, then [Name]
// is considered a documentation link for a const, func, type, or var.
//
// Similarly, if LookupSym("Recv", "Name") returns true,
// then [Recv.Name] is considered a documentation link for
// type Recv's method Name.
//
// Setting LookupSym to nil is equivalent to setting it to a function
// that always returns false.
LookupSym func(recv, name string) (ok bool)
}
A Parser is a doc comment parser. The fields in the struct can be filled in
before calling Parse in order to customize the details of the parsing
process.
func (p *Parser) Parse(text string) *Doc
Parse parses the doc comment text and returns the *Doc form. Comment markers
(/* // and */) in the text must have already been removed.
type Plain string
A Plain is a string rendered as plain text (not italicized).
type Printer struct {
// HeadingLevel is the nesting level used for
// HTML and Markdown headings.
// If HeadingLevel is zero, it defaults to level 3,
// meaning to use <h3> and ###.
HeadingLevel int
// HeadingID is a function that computes the heading ID
// (anchor tag) to use for the heading h when generating
// HTML and Markdown. If HeadingID returns an empty string,
// then the heading ID is omitted.
// If HeadingID is nil, h.DefaultID is used.
HeadingID func(h *Heading) string
// DocLinkURL is a function that computes the URL for the given DocLink.
// If DocLinkURL is nil, then link.DefaultURL(p.DocLinkBaseURL) is used.
DocLinkURL func(link *DocLink) string
// DocLinkBaseURL is used when DocLinkURL is nil,
// passed to [DocLink.DefaultURL] to construct a DocLink's URL.
// See that method's documentation for details.
DocLinkBaseURL string
// TextPrefix is a prefix to print at the start of every line
// when generating text output using the Text method.
TextPrefix string
// TextCodePrefix is the prefix to print at the start of each
// preformatted (code block) line when generating text output,
// instead of (not in addition to) TextPrefix.
// If TextCodePrefix is the empty string, it defaults to TextPrefix+"\t".
TextCodePrefix string
// TextWidth is the maximum width text line to generate,
// measured in Unicode code points,
// excluding TextPrefix and the newline character.
// If TextWidth is zero, it defaults to 80 minus the number of code points in TextPrefix.
// If TextWidth is negative, there is no limit.
TextWidth int
}
A Printer is a doc comment printer. The fields in the struct can be filled
in before calling any of the printing methods in order to customize the
details of the printing process.
func (p *Printer) Comment(d *Doc) []byte
Comment returns the standard Go formatting of the Doc, without any comment
markers.
func (p *Printer) HTML(d *Doc) []byte
HTML returns an HTML formatting of the Doc. See the [Printer] documentation
for ways to customize the HTML output.
func (p *Printer) Markdown(d *Doc) []byte
Markdown returns a Markdown formatting of the Doc. See the [Printer]
documentation for ways to customize the Markdown output.
func (p *Printer) Text(d *Doc) []byte
Text returns a textual formatting of the Doc. See the [Printer]
documentation for ways to customize the text output.
type Text interface {
// Has unexported methods.
}
A Text is text-level content in a doc comment, one of [Plain], [Italic],
*[Link], or *[DocLink].
```
# Proposal: Custom Fuzz Input Types
Author: Richard Hansen <rhansen@rhansen.org>
Last updated: 2023-06-08
Discussion at https://go.dev/issue/48815.
## Abstract
Extend [`testing.F.Fuzz`](https://pkg.go.dev/testing#F.Fuzz) to support custom
types, with their own custom mutation logic, as input parameters. This enables
developers to perform [structure-aware
fuzzing](https://github.com/google/fuzzing/blob/master/docs/structure-aware-fuzzing.md).
## Background
As of Go 1.20, `testing.F.Fuzz` only accepts fuzz functions that have basic
parameter types: `[]byte`, `string`, `int`, etc. Custom input types with custom
mutation logic would make it easier to fuzz functions that take complex data
structures as input.
It is technically possible to fuzz such functions using the basic types, but the
benefit is limited:
* A basic input type can be used as a pseudo-random number generator seed to
generate a valid structure at test time. Downsides:
* The seed, not the generated structure, is saved in
`testdata/fuzz/FuzzTestName/*`. This makes it difficult for developers
to examine the structure to figure out why it is interesting. It also
means that a minor change to the structure generation algorithm can
invalidate the entire seed corpus.
* A problematic or interesting structure discovered or created outside of
fuzzing cannot be added to the seed corpus.
* `F.Fuzz` cannot distinguish the structure generation code from the code
under test, so the structure generation code is instrumented and
included in `F.Fuzz`'s analysis. This causes unnecessary slowdowns and
false positives (uninteresting inputs treated as interesting due to
changed coverage).
* `F.Fuzz` has limited ability to explore or avoid "similar" inputs in its
pursuit of new execution paths. (Similar seeds produce pseudo-randomly
independent structures.)
* Multiple input values can be used to populate the fields of the complex
structure. This has many of the same downsides as using a single seed
input.
* Raw input values can be cast as (an encoding of) the complex structure. For
example, a `[]byte` input could be interpreted as a protobuf. Depending on
the specifics, the yield of this approach (the number of bugs it finds) is
likely to be low due to the low probability of generating a syntactically
and semantically valid structure. (Sometimes it is important to attempt
invalid structures to exercise error handling and discover security
vulnerabilities, but this does not apply to function call traces that are
replayed to test a stateful system.)
See [Structure-Aware Fuzzing with
libFuzzer](https://github.com/google/fuzzing/blob/master/docs/structure-aware-fuzzing.md)
for additional background.
## Proposal
Extend `testing.F.Fuzz` to accept fuzz functions with parameter types that
implement the following interface (not exported, just documented in
`testing.F.Fuzz`):
```go
// A customMutator is a fuzz input value that is self-mutating. This interface
// extends the encoding.BinaryMarshaler and encoding.BinaryUnmarshaler
// interfaces.
type customMutator interface {
// MarshalBinary encodes the customMutator's value in a platform-independent
// way (e.g., JSON or Protocol Buffers).
MarshalBinary() ([]byte, error)
// UnmarshalBinary restores the customMutator's value from encoded data
// previously returned from a call to MarshalBinary.
UnmarshalBinary([]byte) error
// Mutate pseudo-randomly transforms the customMutator's value. The mutation
// must be repeatable: every call to Mutate with the same starting value and
// seed must result in the same transformed value.
Mutate(ctx context.Context, seed int64) error
}
```
Also extend the seed corpus file format to support custom values. A line for a
custom value has the following form:
```
custom("type identifier here", []byte("marshal output here"))
```
The type identifier is a globally unique and stable identifier derived from the
value's fully qualified type name, such as `"*example.com/mod/pkg.myType"`.
### Example Usage
```go
package pkg_test
import (
	"context"
	"encoding/json"
	"testing"

	"github.com/go-loremipsum/loremipsum"
)
type fuzzInput struct{ Word string }
func (v *fuzzInput) MarshalBinary() ([]byte, error) { return json.Marshal(v) }
func (v *fuzzInput) UnmarshalBinary(d []byte) error { return json.Unmarshal(d, v) }
func (v *fuzzInput) Mutate(ctx context.Context, seed int64) error {
v.Word = loremipsum.NewWithSeed(seed).Word()
return nil
}
func FuzzInput(f *testing.F) {
f.Fuzz(func(t *testing.T, v *fuzzInput) {
if v.Word == "lorem" {
t.Fatal("boom!")
}
})
}
```
The fuzzer eventually encounters an input value that causes the test function to
fail, and produces a seed corpus file in `testdata/fuzz` like the following:
```
go test fuzz v1
custom("*example.com/mod/pkg_test.fuzzInput", []byte("{\"Word\":\"lorem\"}"))
```
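The payload stored in such a corpus file round-trips through the example type's MarshalBinary and UnmarshalBinary methods; a minimal sketch using encoding/json, as in the example above:

```go
// Round-tripping the corpus entry's payload through the example type's
// JSON-based MarshalBinary/UnmarshalBinary methods.
package main

import (
	"encoding/json"
	"fmt"
)

type fuzzInput struct{ Word string }

func (v *fuzzInput) MarshalBinary() ([]byte, error) { return json.Marshal(v) }
func (v *fuzzInput) UnmarshalBinary(d []byte) error { return json.Unmarshal(d, v) }

func main() {
	// Decode the value saved in the corpus file, then re-encode it.
	var v fuzzInput
	if err := v.UnmarshalBinary([]byte(`{"Word":"lorem"}`)); err != nil {
		panic(err)
	}
	data, err := v.MarshalBinary()
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Word, string(data)) // lorem {"Word":"lorem"}
}
```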
## Rationale
### Private interface
The `customMutator` interface is not exported for a few reasons:
* Exporting is not strictly required because it does not appear anywhere
outside of internal logic.
* It can be easily exported in the future if needed. The opposite is not true:
un-exporting requires a major version change.
* [YAGNI](https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it): Users are
unlikely to want to declare anything with that type. One possible exception
is a compile-time type check such as the following:
```go
var _ testing.CustomMutator = (*myType)(nil)
```
Such a check is unlikely to have much value: the code is likely being
compiled because tests are about to run, and `testing.F.Fuzz`'s runtime
check will immediately catch the bug.
* Exporting now would add friction to extending `testing.F.Fuzz` again in the
future. Should the new interface be exported even if doing so doesn't add
much value beyond consistency?
### `MarshalBinary`, `UnmarshalBinary` methods
`Marshal` and `Unmarshal` would be shorter to type than `MarshalBinary` and
`UnmarshalBinary`, but the longer names make it easier to extend existing types
that already implement the `encoding.BinaryMarshaler` and
`encoding.BinaryUnmarshaler` interfaces.
`MarshalText` and `UnmarshalText` were considered but rejected because the most
natural representation of a custom type might be binary, not text.
`UnmarshalBinary` is used both to load seed corpus files from disk and to
transmit input values between the coordinator and its workers. Unmarshaling
malformed data from disk is allowed to fail, but unmarshaling after
transmission to another process is expected to always succeed.
`MarshalBinary` is used both to save seed corpus files to disk and to transmit
input values between the coordinator and its workers. Marshaling is expected to
always succeed. Despite this, it returns an error for several reasons:
* to implement the `encoding.BinaryMarshaler` interface
* for symmetry with `UnmarshalBinary`
* to match the APIs provided by packages such as `encoding/json` and
`encoding/gob`
* to discourage the use of `panic`
Panicking is especially problematic because:
* The coordinator process currently interprets a panic as a bug in the code
under test, even if it happens outside of the test function.
* Worker process stdout and stderr is currently suppressed, presumably to
[reduce the amount of output
noise](https://github.com/golang/go/blob/aa4d5e739f32397969fd5c33cbc95d316686039f/src/testing/fuzz.go#L380-L383),
so developers might not notice that a failure is caused by a panic in a
custom input type's method.
### `Mutate` method
The `seed` parameter is an `int64`, not an unsigned integer type as is common
for holding random bits, because that is what
[`math/rand.NewSource`](https://pkg.go.dev/math/rand#NewSource) takes.
The `Mutate` method must be repeatable to avoid violating [an assumption in the
coordinator–worker
protocol](https://github.com/golang/go/blob/0a9875c5c809fa70ae6662b8a38f5f86f648badd/src/internal/fuzz/worker.go#L702-L705).
This may be relaxed in the future by revising the protocol.
Some alternatives for the `Mutate` method were considered:
* `Mutate()`: Simplest, but the lack of a seed parameter makes it difficult
to satisfy the repeatability requirement.
* `Mutate(seed int64)`: Simple. Naturally hints to developers that the method
is expected to be fast, repeatable, and error-free, which increases the
effectiveness of fuzzing. Adding a context parameter or error return value
(or both) might be YAGNI, but their absence makes complex mutation
operations more difficult to implement. The lack of an error return value
encourages the use of `panic`, which is problematic for the reasons
discussed in the `MarshalBinary` rationale above.
* `Mutate(seed int64) error`: The error return value discourages the use of
`panic`, and enables better dev UX when debugging complex mutation
operations.
* `Mutate(ctx context.Context, seed int64) error`: The context makes this more
future-proof by enabling advanced techniques once the repeatability
requirement is removed. For example, `Mutate` could send an RPC to a service
that feeds automatic crash report data to fuzzing tasks to increase the
likelihood of encountering an interesting value. The context parameter and
error return value might be YAGNI, but the added implementation complexity
and developer cognitive load is believed to be minor enough to not worry
about it (they can be ignored in most use cases).
* Accept both `Mutate(seed int64)` and `Mutate(ctx context.Context, seed
int64) error`: The second of the two can be added later after accumulating
additional feedback from developers. Supporting both might result in
unnecessary complexity.
Because mutation operations on custom types are expected to be somewhat complex
(otherwise a basic type would probably suffice), the `Mutate(ctx
context.Context, seed int64) error` option is believed to be the best choice.
### Minimization
To simplify the initial implementation, input types are not minimizable.
Minimizability could be added in the future by accepting a type like the
following and calling its `Minimize` method:
```go
// A customMinimizingMutator is a customMutator that supports attempts to reduce
// the size of an interesting value.
type customMinimizingMutator interface {
customMutator
// Minimize attempts to produce the smallest value (usually defined as
// easiest to process by machine and/or humans) that still provides the same
// coverage as the original value. It repeatedly generates candidates,
// checking each one for suitability with the given callback. It returns
// a suitable candidate if it is satisfied that the candidate is
// sufficiently small or nil if it has given up searching.
Minimize(seed int64, check func(candidate any) (bool, error)) (any, error)
}
```
## Compatibility
No changes in behavior are expected with existing code and seed corpus files.
## Implementation
See https://go.dev/cl/493304 for an initial attempt.
For the initial implementation, a worker can simply panic if one of the custom
type's methods returns an error. A future change can improve UX by plumbing the
error.
No particular Go release is targeted.
# Proposal: Permit embedding of interfaces with overlapping method sets
Author: Robert Griesemer
Last update: 2019-10-16
Discussion at [golang.org/issue/6977](https://golang.org/issue/6977).
## Summary
We propose to change the language spec such that embedded interfaces may have overlapping method sets.
## Background
An interface specifies a [method set](https://golang.org/ref/spec#Method_sets). One constraint on method sets is that when specifying an interface, each interface method must have a [unique](https://golang.org/ref/spec#Uniqueness_of_identifiers) non-[blank](https://golang.org/ref/spec#Blank_identifier) name.
For methods that are _explicitly_ declared in the interface, this constraint has served us well. There is little reason to explicitly declare the same method more than once in an interface; doing so is at best confusing and likely a typo or copy-paste bug.
But it is easy for multiple embedded interfaces to declare the same method. For example, we might want to (but cannot today) write:
```Go
type ReadWriteCloser interface {
io.ReadCloser
io.WriteCloser
}
```
This phrasing is invalid Go because it adds the same `Close` method to the interface twice, breaking the uniqueness constraint.
Definitions in which embedding breaks the uniqueness constraint arise naturally for various reasons, including embedded interfaces not under programmer control, diamond-shaped interface embeddings, and other valid design choices; see the discussion below and [issue #6977](http://golang.org/issue/6977) for examples. In general it may not be possible or reasonable to ensure that embedded interfaces do not have overlapping method sets. Today, the only recourse in this situation is to fall back to spelling out the interfaces one method at a time, creating duplication and potential for drift between definitions.
Allowing methods contributed by embedded interfaces to duplicate other methods in the interface would make these natural definitions valid Go, with no runtime implication and only trivial compiler changes.
## Proposal
Currently, in the section on [Interface types](https://golang.org/ref/spec#Interface_types), the language specification states:
> An interface `T` may use a (possibly qualified) interface type name `E` in place of a method specification. This is called _embedding_ interface `E` in `T`; it adds all (exported and non-exported) methods of `E` to the interface `T`.
We propose to change this to:
> An interface `T` may use a (possibly qualified) interface type name `E` in place of a method specification. This is called _embedding_ interface `E` in `T`. The method set of `T` is the _union_ of the method sets of `T`’s explicitly declared methods and of `T`’s embedded interfaces.
And to add the following paragraph:
> A _union_ of method sets contains the (exported and non-exported) methods of each method set exactly once, and methods with the same names must have identical signatures.
Alternatively, this new paragraph could be added to the section on [Method sets](https://golang.org/ref/spec#Method_sets).
## Examples
As before, it will not be permitted to _explicitly_ declare the same method multiple times:
```Go
type I interface {
	m()
	m() // invalid; m was already explicitly declared
}
```
The current spec permits multiple embeddings of an _empty_ interface:
```Go
type E interface {}
type I interface { E; E } // always been valid
```
With this proposal, multiple embeddings of the same interface become valid for _any_ (not just the empty) interface:
```Go
type E interface { m() }
type I interface { E; E } // becomes valid with this proposal
```
If different embedded interfaces provide a method with the same name, their signatures must match, otherwise the embedding interface is invalid:
```Go
type E1 interface { m(x int) bool }
type E2 interface { m(x float32) bool }
type I interface { E1; E2 } // invalid since E1.m and E2.m have the same name but different signatures
```
## Discussion
A more restricted approach might disallow embedded interfaces from overlapping with the method set defined by the explicitly declared methods of the embedding interface since it is always possible to not declare those “extra” methods. Or in other words, one can always remove explicitly declared methods if they are added via an embedded interface. We believe that would make this language change less robust. For example, consider a hypothetical database API for holding personnel data. A person’s record might be accessible through an interface:
```Go
type Person interface {
	Name() string
	Age() int
	...
}
```
A client might have a more specific implementation storing employees, which are also Persons:
```Go
type Employee interface {
	Person
	Level() int
	…
	String() string
}
```
An Employee happens to have a `String` method to ease debugging. If the underlying DB API changes and somebody adds a `String` method to Person, the Employee interface would become invalid, because now `String` would be a duplicated method in `Employee`. To make it work again, one would have to remove the `String` method from the `Employee` interface.
Changing the language to ignore duplicated methods that arise from embedding enables more graceful code evolution (in this case, `Person` adding a String method without breaking `Employee`).
Permitting method sets to overlap with the embedding interface is also a bit simpler to describe in the spec, which helps with keeping the added complexity small.
In summary, we believe that allowing interfaces to have overlapping method sets removes a pain point for many programmers without adding undue complexity to the language and at a minor cost in the implementation.
## Compatibility
This is a backward-compatible language change; any valid existing program will remain valid. This proposal simply expands the set of interfaces that may be embedded in another interface.
## Implementation
The implementation requires:
- Adjusting the compiler’s type-checker to allow overlapping embedded interfaces.
- Adjusting `go/types` analogously.
- Adjusting the Go spec as outlined earlier.
- Adjusting gccgo accordingly (type-checker).
- Testing the changes by adding new tests.
No library changes are required. In particular, reflect only allows listing the methods in an interface; it does not expose information about embedding or other details of the interface definition.
Robert Griesemer will do the spec and `go/types` changes including additional tests, and (probably) also the `cmd/compile` compiler changes. We aim to have all the changes ready at the start of the [Go 1.14 cycle](https://golang.org/wiki/Go-Release-Cycle), around August 1, 2019.
Separately, Ian Lance Taylor will look into the gccgo changes; gccgo is released according to a different schedule.
As noted in our [“Go 2, here we come!” blog post](https://blog.golang.org/go2-here-we-come), the development cycle will serve as a way to collect experience about these new features and feedback from (very) early adopters.
At the release freeze, November 1, we will revisit this proposed feature and decide whether to include it in Go 1.14.
**Update**: These changes were implemented around the beginning of August 2019. The [`cmd/compile` compiler changes](https://golang.org/cl/187519) were done by Matthew Dempsky and turned out to be small. The [`go/types` changes](https://golang.org/cl/191257) required a rewrite of the way type checking of interfaces was implemented because the old code was not easily adjustable to the new semantics. That rewrite led to a significant simplification with a code savings of approx. 400 lines. This proposal forced the rewrite, but the proposal was not the reason for the code savings; the rewrite would have been beneficial either way. (Had the rewrite been done before and independently of this proposal, the change required would have been as small as it was for `cmd/compile` since the relevant code in `go/types` and the compiler closely corresponds.)
## Appendix: A typical example
Below is [an example](https://golang.org/issues/6977#issuecomment-218985935) by [Hasty Granbery](https://github.com/hasty); this example is representative of a common situation (diamond-shaped embedding graphs) where people run into problems with the status quo. A few more examples can be found in [issue 6977](https://golang.org/issue/6977).
In this specific scenario, various different database APIs are defined via interfaces to fully hide the implementation and to simplify testing (it's easy to install a mock implementation in the interface). A typical interface might be:
```Go
package user

type Database interface {
	GetAccount(accountID uint64) (model.Account, error)
}
```
A few other packages may want to be able to fetch accounts under some circumstances, so they require their databases to have all of `user.Database`'s methods:
```Go
package device

type Database interface {
	user.Database
	SaveDevice(accountID uint64, device model.Device) error
}
```
```Go
package wallet

type Database interface {
	user.Database
	ReadWallet(accountID uint64) (model.Wallet, error)
}
```
Finally, there is a package that needs both the `device` and `wallet` `Database`:
```Go
package shopping

type Database interface {
	device.Database
	wallet.Database
	Buy(accountID uint64, deviceID uint64) error
}
```
Since both `device.Database` and `wallet.Database` have the `GetAccount` method, `shopping.Database` is invalid with the current spec. If this proposal is accepted, this code will become valid.
|
design | /home/linuxreitt/Michinereitt/Tuning/Workshop_Scripts/hf-codegen/data/golang_public_repos/proposal/design/18802-percpu-sharded.md | # Proposal: percpu.Sharded, an API for reducing cache contention
Discussion at https://golang.org/issue/18802
## Abstract
As it stands, Go programs do not have a good way to avoid contention when
combining highly concurrent code with shared memory locations that frequently
require mutation. This proposal describes a new package and type to satisfy this
need.
## Background
There are several scenarios in which a Go program might want to have shared data
that is mutated frequently. We will briefly discuss some of these scenarios,
such that we can evaluate the proposed API against these concrete use-cases.
### Counters
In RPC servers or in scientific computing code, there is often a need for global
counters. For instance, in an RPC server, a global counter might count the
number of requests received by the server, or the number of bytes received by
the server. Go makes it easy to write RPC servers which are inherently
concurrent, often processing each connection and each request on a concurrent
goroutine. This means that in the context of a multicore machine, several
goroutines can be incrementing the same global counter in parallel. Using an API
like `atomic.AddInt64` will ensure that such a counter is lock-free, but
parallel goroutines will be contending for the same cache line, so this counter
will not experience linear scalability as the number of cores is increased.
(Indeed, one might even expect scalability to unexpectedly decrease due to
increased core-to-core communication).
It's probably also worth noting that there are other similar use-cases in this
space (e.g. types that record distributions rather than just sums, max-value
trackers, etc).
### Read-write locks
It is common in programs to have data that is read frequently, but written very
rarely. In these cases, a common synchronization primitive is `sync.RWMutex`,
which offers an `RLock`/`RUnlock` API for readers. When no writers are
interacting with an `RWMutex`, an arbitrary number of goroutines can use the
read-side of the `RWMutex` without blocking.
However, in order to correctly pair calls to `RLock` with calls to `RUnlock`,
`RWMutex` internally maintains a counter, which is incremented during `RLock`
and decremented during `RUnlock`. There is also other shared mutable state that
is atomically updated inside the `RWMutex` during these calls (and during
calls to `Lock` and `Unlock`). For reasons similar to the previous example, if
many goroutines are acquiring and releasing read-locks concurrently and the
program is running on a multicore machine, then it is likely that the
performance of `RWMutex.RLock`/`RWMutex.RUnlock` will not scale linearly with
the number of cores given to the program.
### RPC clients
In programs that make many RPC calls in parallel, there can be
contention on shared mutable state stored inside the RPC or HTTP clients. For
instance, an RPC client might support connecting to a pool of servers, and
implements a configurable load balancing policy to select a connection to use
for a given RPC; these load balancing policies often need to maintain state for
each connection in the pool of managed connections. For instance, an
implementation of the "Least-Loaded" policy needs to maintain a counter of
active requests per connection, such that a new request can select the
connection with the least number of active requests. In scenarios where a client
is performing a large number of requests in parallel (perhaps enqueueing many
RPCs before finally waiting on them at a later point in the program), then
contention on this internal state can start to affect the rate at which the
requests can be dispatched.
### Order-independent accumulators
In data-processing pipelines, code running in a particular stage may want to
'batch' its output, such that it only sends data downstream in N-element
batches, rather than sending single elements through the pipeline at a time. In
the single-goroutine case and where the element type is 'byte', then the
familiar type `bufio.Writer` implements this pattern. Indeed, one
option for the general data-processing pipeline, is to have a single goroutine
run every stage in the pipeline from end-to-end, and then instantiate a small
number of parallel pipeline instances. This strategy effectively handles
pipelines composed solely of stages dominated by CPU-time. However, if a
pipeline contains any IO (e.g. initially reading the input from a distributed
file system, making RPCs, or writing the result back to a distributed file
system), then this setup will not be efficient, as a single stall in IO will
take out a significant chunk of your throughput.
To mitigate this problem, IO bound stages need to run many goroutines. Indeed,
a clever framework (like Apache Beam) can detect these sorts of situations
dynamically, by measuring the rate of stage input compared to the rate of stage
output; they can even reactively increase or decrease the "concurrency level" of
a stage in response to these measurements. In Beam's case, it might do this by
dynamically changing the number of threads-per-binary, or number of
workers-per-stage.
When stages have varying concurrency levels, but are connected to each other in
a pipeline structure, it is important to place a concurrency-safe abstraction
between the stages to buffer elements waiting to be processed by the next stage.
Ideally, this structure would minimize the contention experienced by the caller.
## The Proposed API
To solve these problems, we propose an API with a single new type
`percpu.Sharded`. Here is an outline of the proposed API.
```go
// Package percpu provides low-level utilities for avoiding contention on
// multicore machines.
package percpu

// A Sharded is a container of homogeneously typed values.
//
// On a best effort basis, the runtime will strongly associate a given value
// with a CPU core. That is to say, retrieving a value twice on the same CPU
// core will return the same value with high probability. Note that the runtime
// cannot guarantee this fact, and clients must assume that retrieved values
// can be shared between concurrently executing goroutines.
//
// Once a value is placed in a Sharded, the Sharded will retain a reference to
// this value permanently. Clients can control the maximum number of distinct
// values created using the SetMaxShards API.
//
// A Sharded must not be copied after first use.
//
// All methods are safe to call from multiple goroutines.
type Sharded struct {
	// contains unexported fields
}

// SetMaxShards sets a limit on the maximum number of elements stored in the
// Sharded.
//
// It will not apply retroactively; any elements already created will remain
// inside the Sharded.
//
// If maxShards is less than 1, SetMaxShards will panic.
func (s *Sharded) SetMaxShards(maxShards int)

// GetOrCreate retrieves a value roughly associated with the current CPU. If
// there is no such value, then createFn is called to create a value, store it
// in the Sharded, and return it.
//
// All calls to createFn are serialized; this means that one must complete
// before the next one is started.
//
// createFn should not return nil, or Sharded will panic.
//
// If createFn is called with a ShardInfo.ShardIndex equal to X, no future call
// to GetOrCreate will call createFn again with a ShardInfo.ShardIndex equal to
// X.
func (s *Sharded) GetOrCreate(createFn func(ShardInfo) interface{}) interface{}

// Get retrieves any preexisting value associated with the current CPU. If
// there is no such value, nil is returned.
func (s *Sharded) Get() interface{}

// Do iterates over a snapshot of all elements stored in the Sharded, and calls
// fn once for each element.
//
// If more elements are created during the iteration itself, they may be
// visible to the iteration, but this is not guaranteed. For stronger
// guarantees, see DoLocked.
func (s *Sharded) Do(fn func(interface{}))

// DoLocked iterates over all the elements stored in the Sharded, and calls fn
// once for each element.
//
// DoLocked will observe a consistent snapshot of the elements in the Sharded;
// any previous creations will complete before the iteration begins, and all
// subsequent creations will wait until the iteration ends.
func (s *Sharded) DoLocked(fn func(interface{}))

// ShardInfo contains information about a CPU core.
type ShardInfo struct {
	// ShardIndex is strictly less than the value passed to any prior call to
	// SetMaxShards.
	ShardIndex int
}
```
## Evaluating the use-cases
Here, we evaluate the proposed API in light of the use-cases described above.
### Counters
A counter API can be fairly easily built on top of `percpu.Sharded`.
Specifically, it would offer two methods `IncrementBy(int64)`, and `Sum() int64`.
The former would only allow positive increments (if required, clients can build
negative increments by composing two counters of additions and subtractions).
The implementation of `IncrementBy` would call `GetOrCreate`, passing a
function that returns an `*int64`. To avoid false sharing, it would probably
return an interior pointer into a struct with
appropriate padding. Once the pointer is retrieved from `GetOrCreate`, the
function would then use `atomic.AddInt64` on that pointer with the value passed
to `IncrementBy`.
The implementation of `Sum` would call `Do` to retrieve a snapshot of all
previously created values, then sum up their values using `atomic.LoadInt64`.
If the application is managing many long-lived counters, then one possible
optimization would be to implement the `Counter` type in terms of a
`counterBatch` (which logically encapsulates `N` independent counters). This can
drastically reduce the padding overhead required to prevent false sharing.
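Under these assumptions, such a counter might be sketched as follows. Since `percpu.Sharded` does not exist yet, the block uses a hypothetical `shardedStandIn` type: a trivial single-shard stand-in with the same method shapes. It reduces no contention, but it lets the `Counter` logic (padded shards, atomic adds, `Do`-based summation) be exercised today:

```go
package main

import (
	"sync"
	"sync/atomic"
)

// ShardInfo mirrors the proposed percpu.ShardInfo.
type ShardInfo struct{ ShardIndex int }

// shardedStandIn is a hypothetical, single-shard stand-in for the proposed
// percpu.Sharded. It has the same method shapes but no contention avoidance.
type shardedStandIn struct {
	mu   sync.Mutex
	elem interface{}
}

func (s *shardedStandIn) GetOrCreate(createFn func(ShardInfo) interface{}) interface{} {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.elem == nil {
		s.elem = createFn(ShardInfo{ShardIndex: 0})
	}
	return s.elem
}

func (s *shardedStandIn) Do(fn func(interface{})) {
	s.mu.Lock()
	elem := s.elem
	s.mu.Unlock()
	if elem != nil {
		fn(elem)
	}
}

// shard pads its counter word so distinct shards land on distinct cache lines.
type shard struct {
	n int64
	_ [56]byte // pad the 8-byte counter out to a 64-byte cache line
}

type Counter struct {
	sharded shardedStandIn
}

// IncrementBy adds delta to the shard associated with the current CPU.
func (c *Counter) IncrementBy(delta int64) {
	p := c.sharded.GetOrCreate(func(ShardInfo) interface{} { return new(shard) }).(*shard)
	atomic.AddInt64(&p.n, delta)
}

// Sum walks all shards and adds up their values.
func (c *Counter) Sum() int64 {
	var total int64
	c.sharded.Do(func(e interface{}) {
		total += atomic.LoadInt64(&e.(*shard).n)
	})
	return total
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				c.IncrementBy(1)
			}
		}()
	}
	wg.Wait()
	if c.Sum() != 8000 {
		panic("unexpected sum")
	}
}
```

With the real `percpu.Sharded`, only `shardedStandIn` would change; `Counter` itself would be unmodified.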
### Read-write locks
It is a little tricky to implement a drop-in replacement for `sync.RWMutex` on
top of `percpu.Sharded`. Naively, one could imagine a sharded lock composed of
many internal `sync.RWMutex` instances. Calling `RLock()` on the aggregate lock
would grab a single `sync.RWMutex` instance using `GetOrCreate` and then call
`RLock()` on that instance. Unfortunately, because there is no state passed
between `RLock()` and `RUnlock()` (something we should probably consider fixing
for Go 2), we cannot implement `RUnlock()` efficiently, as the `percpu.Sharded`
might have migrated to a different shard and therefore we've lost the
association to the original `RLock()`.
That said, since such a sharded lock would be considerably more memory-hungry
than a normal `sync.RWMutex`, callers should only replace truly contended
mutexes with a sharded lock, so requiring them to make minor API changes should
not be too onerous (particularly for mutexes, which should always be private
implementation details, and therefore not cross API boundaries). In particular,
one could have `RLock()` on the sharded lock return a `RLockHandle` type, which
has a `RUnlock()` method. This `RLockHandle` could keep an internal pointer to
the `sync.RWMutex` that was initially chosen, and it can then `RUnlock()` that
specific instance.
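A sketch of that handle-based API follows. The fixed shard count and the random shard selection are both assumptions standing in for the proposed per-CPU hint, which requires runtime support that does not exist yet:

```go
package main

import (
	"math/rand"
	"sync"
)

// RLockHandle remembers which shard a reader locked, so RUnlock releases
// the same sync.RWMutex that RLock acquired.
type RLockHandle struct {
	mu *sync.RWMutex
}

func (h RLockHandle) RUnlock() { h.mu.RUnlock() }

// ShardedRWMutex spreads read-lock traffic across several sync.RWMutex
// instances; writers must acquire all of them.
type ShardedRWMutex struct {
	shards []sync.RWMutex
}

func NewShardedRWMutex(n int) *ShardedRWMutex {
	return &ShardedRWMutex{shards: make([]sync.RWMutex, n)}
}

// RLock read-locks one shard and returns a handle to unlock it.
// A random index stands in for the per-CPU shard selection.
func (m *ShardedRWMutex) RLock() RLockHandle {
	mu := &m.shards[rand.Intn(len(m.shards))]
	mu.RLock()
	return RLockHandle{mu: mu}
}

// Lock write-locks every shard in a fixed order, excluding all readers.
func (m *ShardedRWMutex) Lock() {
	for i := range m.shards {
		m.shards[i].Lock()
	}
}

func (m *ShardedRWMutex) Unlock() {
	for i := range m.shards {
		m.shards[i].Unlock()
	}
}

func main() {
	m := NewShardedRWMutex(4)
	h := m.RLock()
	h.RUnlock()
	m.Lock()
	m.Unlock()
}
```

This is why the API change matters: the handle carries the reader's shard association, which a bare `RUnlock()` on the aggregate lock could not recover.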
It's worth noting that it's also possible to drastically change the standard
library's `sync.RWMutex` implementation itself to be scalable by default using
`percpu.Sharded`; this is why the implementation sketch below is careful not to
use the `sync` package, to avoid circular dependencies. See Facebook's
[SharedMutex](https://github.com/facebook/folly/blob/a440441d2c6ba08b91ce3a320a61cf0f120fe7f3/folly/SharedMutex.h#L148)
class to get a sense of how this could be done. However, that requires
significant research and deserves a proposal of its own.
### RPC clients
It's straightforward to use `percpu.Sharded` to implement a sharded RPC client.
This is a case where it's likely that the default implementation will continue to
be unsharded, and a program will need to explicitly say something like
`grpc.Dial("some-server", grpc.ShardedClient(4))` (where the "4" might come from
an application flag). This kind of client-controllable sharding is one place
where the `SetMaxShards` API can be useful.
### Order-independent accumulators
This can be implemented using `percpu.Sharded`. For instance, a writer would
call `GetOrCreate` to retrieve a shard-local buffer, they would acquire a lock,
and insert the element into the buffer. If the buffer became full, they would
flush it downstream.
A watchdog goroutine could walk the buffers periodically using `Do`, and
flush partially-full buffers to ensure that elements are flushed fairly
promptly. If it finds no elements to flush, it could start incrementing a
counter of "useless" scans, and stop scanning after it reaches a threshold. If a
writer is enqueuing the first element in a buffer, and it sees the counter over
the threshold, it could reset the counter, and wake the watchdog.
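A sketch of a single shard's buffer under these assumptions follows. The batch size and downstream `flush` function are hypothetical parameters; a full version would keep one `shardBuffer` per shard via `GetOrCreate` and have the watchdog scan them with `Do`:

```go
package main

import "sync"

// shardBuffer accumulates elements under a lock and hands full batches
// to the downstream flush function.
type shardBuffer struct {
	mu    sync.Mutex
	batch []int
	n     int         // batch size threshold
	flush func([]int) // downstream stage
}

// Add appends v; if the batch is full it is detached under the lock and
// flushed outside the lock, so a slow downstream stage does not block
// concurrent writers on this shard.
func (b *shardBuffer) Add(v int) {
	b.mu.Lock()
	b.batch = append(b.batch, v)
	var full []int
	if len(b.batch) >= b.n {
		full = b.batch
		b.batch = nil
	}
	b.mu.Unlock()
	if full != nil {
		b.flush(full)
	}
}

func main() {
	var flushed [][]int
	b := &shardBuffer{n: 3, flush: func(xs []int) { flushed = append(flushed, xs) }}
	for i := 0; i < 7; i++ {
		b.Add(i)
	}
	// Two full batches flushed; one element remains buffered for the watchdog.
	if len(flushed) != 2 {
		panic("expected two full batches")
	}
}
```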
## Sketch of Implementation
What follows is a rough sketch of an implementation of `percpu.Sharded`. This is
to show that this is implementable, and to give some context to the discussion
of performance below.
First, a sketch of an implementation for `percpu.sharder`, an internal helper
type for `percpu.Sharded`.
```go
const (
	defaultUserDefinedMaxShards = 32
)

type sharder struct {
	maxShards int32
}

func (s *sharder) SetMaxShards(maxShards int) {
	if maxShards < 1 {
		panic("maxShards < 1")
	}
	atomic.StoreInt32(&s.maxShards, roundDownToPowerOf2(int32(maxShards)))
}

func (s *sharder) userDefinedMaxShards() int32 {
	max := atomic.LoadInt32(&s.maxShards)
	if max == 0 {
		return defaultUserDefinedMaxShards
	}
	return max
}

func (s *sharder) shardInfo() ShardInfo {
	shardId := runtime_getShardIndex()
	// If we're in race mode, then all bets are off. Half the time, randomize the
	// shardId completely; the rest of the time, use shardId 0.
	//
	// If we're in a test but not in race mode, then we want an implementation
	// that keeps cache contention to a minimum so benchmarks work properly, but
	// we still want to flush out any assumption of a stable mapping to shardId.
	// So half the time, we double the id. This catches fewer problems than what
	// we get in race mode, but it should still catch one class of issue (clients
	// assuming that two sequential calls to Get() will return the same value).
	if raceEnabled {
		rnd := runtime_fastRand()
		if rnd%2 == 0 {
			shardId = 0
		} else {
			shardId += rnd / 2
		}
	} else if testing {
		if runtime_fastRand()%2 == 0 {
			shardId *= 2
		}
	}
	shardId &= runtimeDefinedMaxShards() - 1
	shardId &= s.userDefinedMaxShards() - 1
	return ShardInfo{ShardIndex: int(shardId)}
}

func runtimeDefinedMaxShards() int32 {
	max := runtime_getMaxShards()
	if (testing || raceEnabled) && max < 4 {
		max = 4
	}
	return max
}

// Implemented in the runtime; should effectively be
// roundUpToPowerOf2(min(GOMAXPROCS, NumCPU))
// (maybe caching that periodically in the P).
func runtime_getMaxShards() int32 {
	return 4
}

// Implemented in the runtime; should effectively be the result of the getcpu
// syscall, or similar. The returned indices should be densified if possible
// (i.e. if the binary is locked to cores 2 and 4, they should be 0 and 1
// respectively, not 2 and 4).
//
// Densification can be best-effort, and done with a process-wide mapping table
// maintained by sysmon periodically.
//
// Does not have to be bounded by runtime_getMaxShards(), or indeed by
// anything.
func runtime_getShardIndex() int32 {
	return 0
}

// Implemented in the runtime. Only technically needs an implementation for
// raceEnabled and tests. Should be scalable (e.g. using a per-P seed and
// state).
func runtime_fastRand() int32 {
	return 0
}
```
Next, a sketch of `percpu.Sharded` itself.
```go
type Sharded struct {
	sharder

	lock uintptr
	data atomic.Value // *shardedData
	typ  unsafe.Pointer
}

func (s *Sharded) loadData() *shardedData {
	d, _ := s.data.Load().(*shardedData)
	return d
}

func (s *Sharded) getFastPath(createFn func(ShardInfo) interface{}) (out interface{}) {
	shardInfo := s.shardInfo()
	curData := s.loadData()
	if curData == nil || shardInfo.ShardIndex >= len(curData.elems) {
		if createFn == nil {
			return nil
		}
		return s.getSlowPath(shardInfo, createFn)
	}
	existing := curData.load(shardInfo.ShardIndex)
	if existing == nil {
		if createFn == nil {
			return nil
		}
		return s.getSlowPath(shardInfo, createFn)
	}
	outp := (*ifaceWords)(unsafe.Pointer(&out))
	outp.typ = s.typ
	outp.data = existing
	return
}

func (s *Sharded) getSlowPath(shardInfo ShardInfo, createFn func(ShardInfo) interface{}) (out interface{}) {
	runtime_lock(&s.lock)
	defer runtime_unlock(&s.lock)
	curData := s.loadData()
	if curData == nil || shardInfo.ShardIndex >= len(curData.elems) {
		curData = allocShardedData(curData, shardInfo)
		s.data.Store(curData)
	}
	existing := curData.load(shardInfo.ShardIndex)
	if existing != nil {
		outp := (*ifaceWords)(unsafe.Pointer(&out))
		outp.typ = s.typ
		outp.data = existing
		return
	}
	newElem := createFn(shardInfo)
	if newElem == nil {
		panic("createFn returned nil value")
	}
	newElemP := *(*ifaceWords)(unsafe.Pointer(&newElem))
	// If this is the first call to createFn, then stash the type-pointer for
	// later verification.
	//
	// Otherwise, verify it's the same as the previous one.
	if s.typ == nil {
		s.typ = newElemP.typ
	} else if s.typ != newElemP.typ {
		panic("percpu: GetOrCreate was called with function that returned inconsistently typed value")
	}
	// Store back the new value.
	curData.store(shardInfo.ShardIndex, newElemP.data)
	// Return it.
	outp := (*ifaceWords)(unsafe.Pointer(&out))
	outp.typ = s.typ
	outp.data = newElemP.data
	return
}

func (s *Sharded) GetOrCreate(createFn func(ShardInfo) interface{}) interface{} {
	if createFn == nil {
		panic("createFn nil")
	}
	return s.getFastPath(createFn)
}

func (s *Sharded) Get() interface{} {
	return s.getFastPath(nil)
}

func (s *Sharded) Do(fn func(interface{})) {
	curData := s.loadData()
	if curData == nil {
		return
	}
	for i := range curData.elems {
		elem := curData.load(i)
		if elem == nil {
			continue
		}
		var next interface{}
		nextP := (*ifaceWords)(unsafe.Pointer(&next))
		nextP.typ = s.typ
		nextP.data = elem
		fn(next)
	}
}

func (s *Sharded) DoLocked(fn func(interface{})) {
	runtime_lock(&s.lock)
	defer runtime_unlock(&s.lock)
	s.Do(fn)
}

type shardedData struct {
	elems []unsafe.Pointer
}

// ifaceWords mirrors the runtime layout of an interface{} value.
type ifaceWords struct {
	typ  unsafe.Pointer
	data unsafe.Pointer
}
```
## Performance
### `percpu.sharder`
As presented, calling `shardInfo` on a `percpu.sharder` makes two calls to the
runtime, and does a single atomic load.
However, both of the calls to the runtime would be satisfied with stale values.
So, an obvious avenue of optimization is to squeeze these two pieces of
information (effectively "current shard", and "max shards") into a single word,
and cache it on the `P` when first calling the `shardInfo` API. To accommodate
changes in the underlying values, a `P` can store a timestamp when it last
computed these values, and clear the cache when the value is older than `X` and
the `P` is in the process of switching goroutines.
This means that effectively, `shardInfo` will consist of 2 atomic loads, and a
little bit of math on the resulting values.
### `percpu.Sharded`
In the get-for-current-shard path, `percpu.Sharded` will call `shardInfo`, and
then perform 2 atomic loads (to retrieve the list of elements and to retrieve
the specific element for the current shard, respectively). If either of these
loads fails, it might fall back to a much-slower slow path. In the fast path,
there's no allocation.
In the get-all path, `percpu.Sharded` will perform a single atomic load followed
by `O(n)` atomic loads, proportional to the number of elements stored in the
`percpu.Sharded`. It will not allocate.
## Discussion
### Garbage collection of stale values in `percpu.Sharded`
With the given API, if `GOMAXPROCS` (or the set of CPUs assigned to the
program) is temporarily increased and then decreased to its original value, a
`percpu.Sharded` might have allocated additional elements to satisfy the
additional CPUs. These additional elements would not be eligible for garbage
collection, as the `percpu.Sharded` would retain an internal reference.
First, it's worth noting that we cannot unilaterally shrink the number of
elements stored in a `percpu.Sharded`, because this might affect program
correctness. For instance, this could result in counters losing values, or in
breaking the invariants of sharded locks.
The presented solution just sidesteps this problem by defaulting to a fairly low
value of `MaxShards`. This can be overridden by the user with explicit action
(though the runtime has the freedom to bound the number more strictly than the
user's value, e.g. to limit the size of internal data-structures to reasonable
levels).
One thing to keep in mind: clients who require garbage collection of stale
values can build this on top of `percpu.Sharded`. For instance, one could
imagine a design where clients would maintain a counter recording each use. A
watchdog goroutine can then scan the elements and if a particular value has not
been used for some period of time, swap in a `nil` pointer, and then gracefully
tear down the value (potentially transferring the logical data encapsulated to
other elements in the `percpu.Sharded`).
Requiring clients to implement their own GC in this way seems somewhat
inelegant, but on the other hand, it's unclear to me how to generically solve
this problem without knowledge of the client use-case. One could imagine some
sort of reference-counting design, but again, without knowing the semantics of
the use-case, it's hard to know if it's safe to clear the reference to the type.
Also, for a performance-oriented type, like `percpu.Sharded`, it seems
unfortunate to add unnecessary synchronization to the fast path of the type (and
I don't see how to implement something in this direction without adding
synchronization).
### Why is `ShardInfo` a struct and not just an int?
This is mostly to retain the ability to extend the API in a compatible manner.
One concrete avenue is to add additional details to allow clients to optimize
their code for the NUMA architecture of the machine. For instance, for a sharded
buffering scheme (i.e. the "Order-independent accumulator" above), it might make
sense to have multiple levels of buffering in play, with another level at the
NUMA-node layer.
### Is `ShardInfo.ShardIndex` returning an id for the CPU, or the `P` executing the goroutine?
This is left unspecified, but the name of the package seems to imply the former.
In practice, I think we want a combination.
That is to say, we would prefer that a program running on a 2-core machine with
`GOMAXPROCS` set to 100 should use 2 shards, not 100. On the other hand, we
would also prefer that a program running on a 100-core machine with `GOMAXPROCS`
set to 2 should also use 2 shards, not 100.
This ideal state should be achievable on systems that provide reasonable APIs to
retrieve the id of the current CPU.
That said, any implementation effort would likely start with a simple portable
implementation which uses the id of the local `P`. This will allow us to get a
sense of the performance of the type, and to serve as a fallback implementation
for platforms where the necessary APIs are either not available, or require
privileged execution.
### Is it a good idea for `percpu.Sharded` to behave differently during tests?
This is a good question; I am not certain of the answer here. I am confident
that during race mode, we should definitely randomize the behaviour of
`percpu.Sharded` significantly (and the implementation sketch above does that).
However, for tests, the answer seems less clear to me.
As presented, the implementation sketch above randomizes the value by flipping
randomly between two values for every CPU. That seems like it will catch bugs
where the client assumes that sequential calls to `Get`/`GetOrCreate` will return
the same values. That amount of randomness seems warranted to me, though I'd
understand if folks would prefer to avoid it in favor of keeping non-test code
and test code behaving identically.
On a more mundane note: I'm not entirely sure whether this is implementable with
zero cost. One fairly efficient strategy would be an internal package that
exposes an "IsTesting bool", which is set by the `testing` package and read by
the `percpu` package. But ideally, this could be optimized away at compile time;
I don't believe we have any mechanism to do this now.
## Should we expose `ShardInfo.ShardIndex` at all?
I think so. Even if we don't, clients can retrieve an effectively equivalent
value by just incrementing an atomic integer inside the `createFn` passed to
`GetOrCreate`. For pre-allocated use-cases (e.g. see the Facebook `SharedMutex`
linked above), it seems important to let clients index into pre-allocated
memory.
## Should we expose both `Get` and `GetOrCreate`?
We could define `GetOrCreate` to behave like `Get` if the passed `createFn` is
nil. This is less API (and might be more efficient, until mid-stack inlining
works), but seems less semantically clean to me. It seems better to just have
clients say what they want explicitly.
## Should we expose both `Do` and `DoLocked`?
If we had to choose one of those, then I would say we should expose `Do`. This
is because it is the higher performance, minimal-synchronization version, and
`DoLocked` can be implemented on top. That said, I do think we should just
provide both. The implementation is simple, and implementing it on top feels
odd.
Of the 4 use-cases presented above, 2 would probably use `Do` (counters and
order-independent accumulators), and 2 would probably use `DoLocked` (read-write
locks, and RPC clients (for the latter, probably just for implementing
`Close()`)).
## Naming
I'm not particularly wedded to any of the names in the API sketch above, so I'm
happy to see it changed to whatever people prefer.
# Backwards compatibility
The API presented above is straightforward to implement without any runtime
support; in particular, this could be implemented as a thin wrapper around a
`sync.Once`. This will not effectively reduce contention, but it would still be
a correct implementation. It's probably a good idea to implement such a shim,
and put it in the `x/sync` repo, with appropriate build tags and type-aliases to
allow clients to immediately start using the new type.
# User-configurable memory target
Author: Michael Knyszek
Updated: 16 February 2021
## Background
Issue [#23044](https://golang.org/issue/23044) proposed the addition of some
kind of API to provide a "minimum heap" size; that is, the minimum heap goal
that the GC would ever set.
The purpose of a minimum heap size, as explored in that proposal, is as a
performance optimization: by preventing the heap from shrinking, each GC cycle
will get longer as the live heap shrinks further beyond the minimum.
While `GOGC` already provides a way for Go users to trade off GC CPU time and
heap memory use, the argument against setting `GOGC` higher is that a live heap
spike is potentially dangerous, since the Go GC will use proportionally more
memory with a high proportional constant.
Instead, users (including a [high-profile account by
Twitch](https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i-learnt-to-stop-worrying-and-love-the-heap-26c2462549a2/))
have resorted to using a heap ballast: a large memory allocation that the Go GC
includes in its live heap size, but does not actually take up any resident
pages, according to the OS.
This technique thus effectively sets a minimum heap size in the runtime.
The main disadvantage of this technique is portability.
It relies on implementation-specific behavior, namely that the runtime will not
touch that new allocation, thereby preventing the OS from backing that space
with RAM on Unix-like systems.
It also relies on the Go GC never scanning that allocation.
This technique is also platform-specific, because on Windows such an allocation
would always count as committed.
Today, the Go GC already has a fixed minimum heap size of 4 MiB.
The reasons around this minimum heap size stem largely from a failure to account
for alternative GC work sources.
See [the GC pacer problems meta-issue](https://golang.org/issue/42430) for more
details.
The problems are resolved by a [proposed GC pacer
redesign](https://golang.org/issue/44167).
## Design
I propose the addition of the following API to the `runtime/debug` package:
```go
// SetMemoryTarget provides a hint to the runtime that it can use at least
// amount bytes of memory. amount is the sum total of in-use Go-related memory
// that the Go runtime can measure.
//
// That explicitly includes:
// - Space and fragmentation for goroutine stacks.
// - Space for runtime structures.
// - The size of the heap, with fragmentation.
// - Space for global variables (including dynamically-loaded plugins).
//
// And it explicitly excludes:
// - Read-only parts of the Go program, such as machine instructions.
// - Any non-Go memory present in the process, such as from C or another
// language runtime.
// - Memory required to maintain OS kernel resources that this process has a
// handle to.
// - Memory allocated via low-level functions in the syscall package, like Mmap.
//
// The intuition and goal with this definition is the ability to treat the Go
// part of any system as a black box: runtime overheads and fragmentation that
// are otherwise difficult to account for are explicitly included.
// Anything that is difficult or impossible for the runtime to measure portably
// is excluded. For these cases, the user is encouraged to monitor these
// sources for their particular system and update the memory target as
// necessary.
//
// The runtime is free to ignore the hint at any time.
//
// In practice, the runtime will use this hint to run the garbage collector
// less frequently by using up any additional memory up-front. Any memory used
// beyond that will obey the GOGC trade-off.
//
// If the GOGC mechanism is turned off, the hint is always ignored.
//
// Note that if the memory target is set higher than the amount of memory
// available on the system, the Go runtime may attempt to use all that memory,
// and trigger an out-of-memory condition.
//
// An amount of 0 will retract the hint. A negative amount will always be
// ignored.
//
// Returns the old memory target, or -1 if none was previously set.
func SetMemoryTarget(amount int) int
```
The design of this feature builds off of the [proposed GC pacer
redesign](https://golang.org/issue/44167).
I propose we move forward with almost exactly what issue
[#23044](https://golang.org/issue/23044) proposed, namely exposing the heap
minimum and making it configurable via a runtime API.
The behavior of `SetMemoryTarget` is thus analogous to the common (but
non-standard) Java runtime flag `-Xms` (with Adaptive Size Policy disabled).
With the GC pacer redesign, smooth behavior here should be straightforward to
ensure, as the troubles here basically boil down to the "high `GOGC`" issue
mentioned in that design.
There's one missing piece and that's how to turn the hint (which is memory use)
into a heap goal.
Because the heap goal includes both stacks and globals, I propose that we
compute the heap goal as follows:
```
Heap goal = amount
// These are runtime overheads.
- MemStats.GCSys
- MemStats.MSpanSys
- MemStats.MCacheSys
- MemStats.BuckHashSys
- MemStats.OtherSys
- MemStats.StackSys
// Fragmentation.
- (MemStats.HeapSys-MemStats.HeapInuse)
- (MemStats.StackInuse-(unused portions of stack spans))
```
What this formula leaves us with is a value that should include:
1. Stack space that is actually allocated for goroutine stacks,
1. Global variables (so the part of the binary mapped read-write), and
1. Heap space allocated for objects.
These are the three factors that go into determining the `GOGC`-based heap goal
according to the GC pacer redesign.
Note that while at first it might appear that this definition of the heap goal
will cause significant jitter in what the heap goal is actually set to, runtime
overheads and fragmentation tend to be remarkably stable over the lifetime of a
Go process.
In an ideal world, that would be it, but as the API documentation points out,
there are a number of sources of memory that are unaccounted for that deserve
more explanation.
Firstly, there are the read-only parts of the binary, like the instructions
themselves, but these parts' impact on memory use is murkier, since the
operating system tends to de-duplicate this memory between processes.
Furthermore, on platforms like Linux, this memory is always evictable, down to
the last available page.
As a result, I intentionally ignore that factor here.
If the size of the binary is a factor, unfortunately it will be up to the user
to subtract out that size from the amount they pass to `SetMemoryTarget`.
The next source of memory is anything non-Go, such as C code (or, say, a
Python VM) running in the same process.
These sources also need to be accounted for by the user because this could be
absolutely anything, and portably interacting with the large number of different
possibilities is infeasible.
Luckily, `SetMemoryTarget` is a run-time API that can be made to respond to
changes in external memory sources that Go could not possibly be aware of, so
the API documentation recommends updating the target on-line if need be.
Another source of memory use is kernel memory.
If the Go process holds onto kernel resources that use memory within the kernel
itself, those are unaccounted for.
Unfortunately, while this API tries to avoid situations where the user needs to
make conservative estimates, this is one such case.
As far as I know, most systems do not associate kernel memory with a process, so
querying and reacting to this information is just impossible.
The final source of memory is memory that's created by the Go program, but that
the runtime isn't necessarily aware of, like explicitly `Mmap`'d memory.
Theoretically the Go runtime could be aware of this specific case, but this is
tricky to account for in general given the wide range of options that can be
passed to `mmap`-like functionality on various platforms.
Sometimes it's worth accounting for it, sometimes not.
I believe it's best to leave that up to the user.
To validate the design, I ran several [simulations](#simulations) of this
implementation.
In general, the runtime is resilient to a changing heap target (even one that
changes wildly) but shrinking the heap target significantly has the potential to
cause GC CPU utilization spikes.
This is by design: the runtime suddenly has much less runway than it thought
before the change, so it needs to make that up to reach its goal.
The only issue I found with this formulation is the potential for consistent
undershoot in the case where the heap size is very small, mostly because we
place a limit on how late a GC cycle can start.
I think this is OK, and I don't think we should alter our current setting.
This choice means that in extreme cases, there may be some missed performance.
But I don't think it's enough to justify the additional complexity.
### Simulations
These simulations were produced by the same tool as those for the [GC pacer
redesign](https://github.com/golang/go/issues/44167).
That is,
[github.com/mknyszek/pacer-model](https://github.com/mknyszek/pacer-model).
See the GC pacer design document for a list of caveats and assumptions, as well
as a description of each subplot, though the plots are mostly straightforward.
**Small heap target.**
In this scenario, we set a fairly small target (around 64 MiB) as a baseline.
This target is fairly close to what `GOGC` would have picked.
Mid-way through the scenario, the live heap grows a bit.
![](44309/low-heap-target.png)
Notes:
- There's a small amount of overshoot when the live heap size changes, which is
expected.
- The pacer is otherwise resilient to changes in the live heap size.
**Very small heap target.**
In this scenario, we set a fairly small target (around 64 MiB) as a baseline.
This target is much smaller than what `GOGC` would have picked, since the live
heap grows to around 5 GiB.
![](44309/very-low-heap-target.png)
Notes:
- `GOGC` takes over very quickly.
**Large heap target.**
In this scenario, we set a fairly large target (around 2 GiB).
This target is fairly far from what `GOGC` would have picked.
Mid-way through the scenario, the live heap grows a lot.
![](44309/high-heap-target.png)
Notes:
- There's a medium amount of overshoot when the live heap size changes, which is
expected.
- The pacer is otherwise resilient to changes in the live heap size.
**Exceed heap target.**
In this scenario, we set a fairly small target (around 64 MiB) as a baseline.
This target is fairly close to what `GOGC` would have picked.
Mid-way through the scenario, the live heap grows enough such that we exit the
memory target regime and enter the `GOGC` regime.
![](44309/exceed-heap-target.png)
Notes:
- There's a small amount of overshoot when the live heap size changes, which is
expected.
- The pacer is otherwise resilient to changes in the live heap size.
- The pacer smoothly transitions between regimes.
**Exceed heap target with a high GOGC.**
In this scenario, we set a fairly small target (around 64 MiB) as a baseline.
This target is fairly close to what `GOGC` would have picked.
Mid-way through the scenario, the live heap grows enough such that we exit the
memory target regime and enter the `GOGC` regime.
The `GOGC` value is set very high.
![](44309/exceed-heap-target-high-GOGC.png)
Notes:
- There's a small amount of overshoot when the live heap size changes, which is
expected.
- The pacer is otherwise resilient to changes in the live heap size.
- The pacer smoothly transitions between regimes.
**Change in heap target.**
In this scenario, the heap target is set mid-way through execution, to around
256 MiB.
This target is fairly far from what `GOGC` would have picked.
The live heap stays constant, meanwhile.
![](44309/step-heap-target.png)
Notes:
- The pacer is otherwise resilient to changes in the heap target.
- There's no overshoot.
**Noisy heap target.**
In this scenario, the heap target is set once per GC and is somewhat noisy.
It swings at most 3% around 2 GiB.
This target is fairly far from what `GOGC` would have picked.
Mid-way through the live heap increases.
![](44309/low-noise-heap-target.png)
Notes:
- The pacer is otherwise resilient to a noisy heap target.
- There's expected overshoot when the live heap size changes.
- GC CPU utilization bounces around slightly.
**Very noisy heap target.**
In this scenario, the heap target is set once per GC and is very noisy.
It swings at most 50% around 2 GiB.
This target is fairly far from what `GOGC` would have picked.
Mid-way through the live heap increases.
![](44309/high-noise-heap-target.png)
Notes:
- The pacer is otherwise resilient to a noisy heap target.
- There's expected overshoot when the live heap size changes.
- GC CPU utilization bounces around, but not much.
**Large heap target with a change in allocation rate.**
In this scenario, we set a fairly large target (around 2 GiB).
This target is fairly far from what `GOGC` would have picked.
Mid-way through the simulation, the application begins to suddenly allocate much
more aggressively.
![](44309/heavy-step-alloc-high-heap-target.png)
Notes:
- The pacer is otherwise resilient to changes in the live heap size.
- There's no overshoot.
- There's a spike in utilization that's consistent with other simulations of the
GC pacer.
- The live heap grows due to floating garbage from the high allocation rate
causing each GC cycle to start earlier.
### Interactions with other GC mechanisms
Although listed already in the API documentation, there are a few additional
details I want to consider.
#### GOGC
The design of the new pacer means that switching between the "memory target"
regime and the `GOGC` regime (the regimes being defined as the mechanism that
determines the heap goal) is very smooth.
While the live heap times `1+GOGC/100` is less than the heap goal set by the
memory target, we are in the memory target regime.
Otherwise, we are in the `GOGC` regime.
Notice that as `GOGC` rises to higher and higher values, the range of the memory
target regime shrinks.
At infinity, meaning `GOGC=off`, the memory target regime no longer exists.
Therefore, it's very clear to me that the memory target should be completely
ignored if `GOGC` is set to "off" or a negative value.
#### Memory limit
If we choose to also adopt an API for setting a memory limit in the runtime, it
would necessarily always need to override a memory target, though both could
plausibly be active simultaneously.
If that memory limit interacts with `GOGC` being set to "off," then the rule of
the memory target being ignored holds; the memory limit effectively acts like a
target in that circumstance.
If the two are set to an equal value, that behavior is virtually identical to
`GOGC` being set to "off" and *only* a memory limit being set.
Therefore, we need only check that these two cases behave identically.
Note, however, that the memory target and the memory limit define different
regimes, so they're otherwise orthogonal.
While there's a fairly large gap between the two (relative to `GOGC`), the two
are easy to separate.
Where it gets tricky is when they're relatively close, and this case would need
to be tested extensively.
## Risks
The primary risk with this proposal is adding another "knob" to the garbage
collector, with `GOGC` famously being the only one.
Lots of language runtimes provide flags and options that alter the behavior of
the garbage collector, but when the number of flags gets large, maintaining
every possible configuration becomes a daunting, if not impossible task, because
the space of possible configurations explodes with each knob.
This risk is a strong reason to be judicious.
The bar for new knobs is high.
But there are a few good reasons why this might still be important.
The truth is, this API already exists, but is considered unsupported and is
otherwise unmaintained.
The API exists in the form of heap ballasts, a concept we can thank Hyrum's Law
for.
It's already possible for an application to "fool" the garbage collector into
thinking there's more live memory than there actually is.
The downside is that resizing the ballast is never going to be nearly as
reactive as the garbage collector itself, because it is at the mercy of the
runtime managing the user application.
The simple fact is performance-sensitive Go users are going to write this code
anyway.
It is worth noting that unlike a memory maximum, for instance, a memory target
is purely an optimization.
On the whole, I suspect it's better for the Go ecosystem for there to be a
single solution to this problem in the standard library, rather than solutions
that *by construction* will never be as good.
And I believe we can mitigate some of the risks with "knob explosion."
The memory target, as defined above, has very carefully specified and limited
interactions with other (potential) GC knobs.
Going forward I believe a good criterion for the addition of new knobs should be
that a knob should only be added if it is *only* fully orthogonal with `GOGC`,
and nothing else.
## Monitoring
I propose adding a new metric to the `runtime/metrics` package to enable
monitoring of the memory target, since that is a new value that could change at
runtime.
I propose the metric name `/memory/config/target:bytes` for this purpose.
Otherwise, it could be useful for an operator to understand which regime the Go
application is operating in at any given time.
We currently expose the `/gc/heap/goal:bytes` metric which could theoretically
be used to determine this, but because of the dynamic nature of the heap goal in
this regime, it won't be clear which regime the application is in at-a-glance.
Therefore, I propose adding another metric `/memory/goal:bytes`.
This metric is analogous to `/gc/heap/goal:bytes` but is directly comparable
with `/memory/config/target:bytes` (that is, it includes additional overheads
beyond just what goes into the heap goal, it "converts back").
When this metric "bottoms out" at a flat line, that should serve as a clear
indicator that the pacer is in the "target" regime.
This same metric could be reused for a memory limit in the future, where it will
"top out" at the limit.
## Documentation
This API has an inherent complexity as it directly influences the behavior of
the Go garbage collector.
It also deals with memory accounting, a process that is infamously (and
unfortunately) difficult to wrap one's head around and get right.
Effective use of this API will come down to having good documentation.
The documentation will have two target audiences: software developers, and
systems administrators (referred to as "developers" and "operators,"
respectively).
For both audiences, it's incredibly important to understand exactly what's
included and excluded in the memory target.
That is why it is explicitly broken down in the most visible possible place for
a developer: the documentation for the API itself.
For the operator, the `runtime/metrics` metric definition should either
duplicate this documentation, or point to the API.
This documentation is important for immediate use and understanding, but API
documentation is never going to be expressive enough.
I propose also introducing a new document to the `doc` directory in the Go
source tree that explains common use-cases, extreme scenarios, and what to
expect in monitoring in these various situations.
This document should include a list of known bugs and how they might appear in
monitoring.
In other words, it should include a more formal and living version of the [GC
pacer meta-issue](https://golang.org/issues/42430).
The hard truth is that memory accounting and GC behavior are always going to
fall short in some cases, and it's immensely useful to be honest and up-front
about those cases where they're known, while always striving to do better.
As every other document in this directory, it would be a living document that
will grow as new scenarios are discovered, bugs are fixed, and new functionality
is made available.
## Alternatives considered
Since this is a performance optimization, it's possible to do nothing.
But as I mentioned in [the risks section](#risks), I think there's a solid
justification for doing *something*.
Another alternative I considered was to provide better hooks into the runtime to
allow users to implement equivalent functionality themselves.
Today, we provide `debug.SetGCPercent` as well as access to a number of runtime
statistics.
Thanks to work done for the `runtime/metrics` package, that information is now
much more efficiently accessible.
By exposing just the right metric, one could imagine a background goroutine that
calls `debug.SetGCPercent` in response to polling for metrics.
The reason I ended up discarding this alternative, however, is that it forces
the user to write code that relies on implementation details of the garbage
collector.
For instance, a reasonable implementation of a memory target using the above
mechanism would be to make an adjustment each time the heap goal changes.
What if future GC implementations don't have a heap goal? Furthermore, the heap
goal needs to be sampled; what if GCs are occurring rapidly? Should the runtime
expose when a GC ends? What if the new GC design is fully incremental, and there
is no well-defined notion of "GC end"? It suffices to say that in order to keep
Go implementations open to new possibilities, we should avoid any behavior that
exposes implementation details.
## Go 1 backwards compatibility
This change only adds to the Go standard library's API surface, and is therefore
Go 1 backwards compatible.
## Implementation
Michael Knyszek will implement this.
1. Implement in the runtime.
1. Extend the pacer simulation test suite with this use-case in a variety of
configurations.
# Error Printing — Draft Design
Marcel van Lohuizen\
August 27, 2018
## Abstract
This document is a draft design for additions to the errors package
to define defaults for formatting error messages,
with the aim of making formatting of different error message implementations interoperable.
This includes the printing of detailed information,
stack traces or other position information, localization, and limitations of ordering.
For more context, see the [error values problem overview](go2draft-error-values-overview.md).
## Background
It is common in Go to build your own error type.
Applications can define their own local types or use
one of the many packages that are available for defining errors.
Broadly speaking, errors serve several audiences:
programs, users, and diagnosers.
Programs may need to make decisions based on the value of errors.
This need is addressed in [the error values draft designs](go2draft-error-values-overview.md).
Users need a general idea of what went wrong.
Diagnosers may require more detailed information.
This draft design focuses on providing legible error printing
to be read by people—users and diagnosers—not programs.
When wrapping one error in context to produce a new error,
some error packages distinguish between opaque and transparent
wrappings, which affect whether error inspection is allowed to
see the original error.
This is a valid distinction.
Even if the original error is hidden from programs, however,
it should typically still be shown to people.
Error printing therefore must use an interface method
distinct from the common “next in error chain” methods
like `Cause`, `Reason`, or the error inspection draft design’s `Unwrap`.
There are several packages that have attempted to provide common error interfaces.
These packages typically do not interoperate well with each other or with bespoke error implementations.
Although the interfaces they define are similar, there are implicit assumptions that lead to poor interoperability.
## Design
This design focuses on printing errors legibly, for people to read.
This includes possible stack trace information,
a consistent ordering,
and consistent handling of formatting verbs.
### Error detail
The design allows for an error message to include additional detail
printed upon request, by using special formatting verb `%+v`.
This detail may include stack traces or other detailed information
that would be reasonable to elide in a shorter display.
Of course, many existing error implementations only have
a short display, and we don’t expect them to change.
But implementations that do track additional detail
will now have a standard way to present it.
### Printing API
The error printing API should allow
- consistent formatting and ordering,
- detailed information that is only printed when requested (such as stack traces),
- defining a chain of errors (possibly different from "reasons" or a programmatic chain),
- localization of error messages, and
- a formatting method that is easy for new error implementations to implement.
The design presented here introduces two interfaces
to satisfy these requirements: `Formatter` and `Printer`,
both defined in the [`errors` package](https://golang.org/pkg/errors).
An error that wants to provide additional detail implements the
`errors.Formatter` interface’s `Format` method.
The `Format` method is passed an `errors.Printer`,
which itself has `Print` and `Printf` methods.
```go
package errors

// A Formatter formats error messages.
type Formatter interface {
	// Format is implemented by errors to print a single error message.
	// It should return the next error in the error chain, if any.
	Format(p Printer) (next error)
}

// A Printer creates formatted error messages. It enforces that
// detailed information is written last.
//
// Printer is implemented by fmt. Localization packages may provide
// their own implementation to support localized error messages
// (see for instance golang.org/x/text/message).
type Printer interface {
	// Print appends args to the message output.
	// String arguments are not localized, even within a localized context.
	Print(args ...interface{})

	// Printf writes a formatted string.
	Printf(format string, args ...interface{})

	// Detail reports whether error detail is requested.
	// After the first call to Detail, all text written to the Printer
	// is formatted as additional detail, or ignored when
	// detail has not been requested.
	// If Detail returns false, the caller can avoid printing the detail at all.
	Detail() bool
}
```
The `Printer` interface is designed to allow localization.
The `Printer` implementation will typically be supplied by the
[`fmt` package](https://golang.org/pkg/fmt)
but can also be provided by localization frameworks such as
[`golang.org/x/text/message`](https://golang.org/x/text/message).
If instead a `Formatter` wrote to an `io.Writer`,
localization with such packages would not be possible.
In this example, `myAddrError` implements `Formatter`:
Example:

```go
type myAddrError struct {
	address string
	detail  string
	err     error
}

func (e *myAddrError) Error() string {
	return fmt.Sprint(e) // delegate to Format
}

func (e *myAddrError) Format(p errors.Printer) error {
	p.Printf("address %s", e.address)
	if p.Detail() {
		p.Print(e.detail)
	}
	return e.err
}
```
This design assumes that the
[`fmt` package](https://golang.org/pkg/fmt)
and localization frameworks will add code to recognize
errors that additionally implement `Formatter`
and use that method for `%+v`.
These packages already recognize `error`; recognizing `Formatter` is only a little more work.
Advantages of this API:
- This API clearly distinguishes informative detail from a causal error chain, giving less rise to confusion.
- Consistency between different error implementations:
- interpretation of formatting flags
- ordering of the error chain
- formatting and indentation
- Less boilerplate for custom error types to implement:
- only one interface to implement besides error
- no need to implement `fmt.Formatter`.
- Flexible: no assumption about the kind of detail information an error implementation might want to print.
- Localizable: packages like golang.org/x/text/message can provide their own implementation of `errors.Printer` to allow translation of messages.
- Detail information is more verbose and somewhat discouraged.
- Performance: a single buffer can be used to print an error.
- Users can implement `errors.Printer` to produce formats.
### Format
Consider an error returned by `foo` calling `bar` calling `baz`. An idiomatic Go error string would be:

```
foo: bar(nameserver 139): baz flopped
```
We suggest the following format for messages with diagnostics detail,
assuming that each layer of wrapping adds additional diagnostics information.
```
foo:
    file.go:123 main.main+0x123
--- bar(nameserver 139):
    some detail only text
    file.go:456
--- baz flopped:
    file.go:789
```
This output is somewhat akin to that of subtests.
The first message is printed as formatted,
but with the detail indented with 4 spaces.
All subsequent messages are indented 4 spaces
and prefixed with `---` and a space at the start of the message.
Indenting the detail of the first message
avoids ambiguity when multiple multiline errors
are printed one after the other.
### Formatting verbs
Today, `fmt.Printf` already prints errors using these verbs:
- `%s`: `err.Error()` as a string
- `%q`: `err.Error()` as a quoted string
- `%+q`: `err.Error()` as an ASCII-only quoted string
- `%v`: `err.Error()` as a string
- `%#v`: `err` as a Go value, in Go syntax
This design defines `%+v` to print the error in the detailed, multi-line format.
### Interaction with source line information
The following API shows how printing stack traces,
either top of the stack or full stacks per error,
could interoperate with this package
(only showing the parts of the API relevant to this discussion).
```go
package errstack

type Stack struct { ... }

// Format writes the stack information to p, but only
// if detailed printing is requested.
func (s *Stack) Format(p errors.Printer) {
	if p.Detail() {
		p.Printf(...)
	}
}
```
This package would be used by adding a `Stack` to each error implementation
that wanted to record one:
    import ".../errstack"

    type myError struct {
        msg        string
        stack      errstack.Stack
        underlying error
    }

    func (e *myError) Format(p errors.Printer) error {
        p.Printf(e.msg)
        e.stack.Format(p)
        return e.underlying
    }

    func newError(msg string, underlying error) error {
        return &myError{
            msg:        msg,
            stack:      errstack.New(),
            underlying: underlying,
        }
    }
### Localized errors
The [`golang.org/x/text/message` package](https://golang.org/x/text/message)
currently has its own
implementation of `fmt`-style formatting.
It would need to recognize `errors.Formatter` and
provide its own implementation of `errors.Printer`
with a translating `Printf` and localizing `Print`.
    import "golang.org/x/text/message"

    p := message.NewPrinter(language.Dutch)
    p.Printf("Error: %v", err)
Any error passed to `%v` that implements `errors.Formatter`
would use the localization machinery.
Only format strings passed to `Printf` would be translated,
although all values would be localized.
Alternatively, since errors are always text,
we could attempt to translate any error message,
or at least to have `gotext` do static analysis
similarly to what it does now for regular Go code.
To facilitate localization, `golang.org/x/text/message` could implement
an `Errorf` equivalent which delays the substitution of arguments
until it is printed so that it can be properly localized.
The `gotext` tool would have to be modified
to extract error string formats from code.
It should be easy to modify the analysis to pick up static error messages
or error messages that are formatted using an `errors.Printer`'s `Printf` method.
However, calls to `fmt.Errorf` will be problematic,
as it substitutes the arguments prematurely.
We may be able to change `fmt.Errorf` to evaluate and save its arguments
but delay the final formatting.
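A minimal sketch of such a delayed-substitution `Errorf`; the `delayedError` type is a hypothetical illustration, not a proposed API:

```go
package main

import "fmt"

// delayedError records the format string and arguments so that
// substitution can be deferred until the error is actually printed,
// giving a localization layer access to the original format.
type delayedError struct {
	format string
	args   []interface{}
}

// Error performs the substitution only when the message is rendered.
func (e *delayedError) Error() string {
	return fmt.Sprintf(e.format, e.args...)
}

// Errorf is a hypothetical delayed-formatting variant of fmt.Errorf.
func Errorf(format string, args ...interface{}) error {
	return &delayedError{format: format, args: args}
}

func main() {
	err := Errorf("lookup %s failed", "example.com")
	fmt.Println(err)
}
```

A translating printer could inspect the stored `format` field, translate it, and then substitute the localized arguments.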
### Error trees
So far we have assumed that there is a single chain of errors.
To implement formatting a tree of errors, an error list type
could print itself as a new error chain,
returning this single error with the entire chain as detail.
Error list types occur fairly frequently,
so it may be beneficial to standardize on an error list type to ensure consistency.
The default output might look something like this:
    foo: bar: baz flopped (and 2 more errors)
The detailed listing would show all the errors:
    foo:
    --- multiple errors:
        bar1
        --- baz flopped
        bar2
        bar3
## Alternate designs
We considered defining multiple optional methods,
to provide fine-grained information such as the underlying error, detailed message, etc.
This had many drawbacks:
- Implementations needed to implement `fmt.Formatter` to correctly handle print verbs,
which was cumbersome and led to inconsistencies and incompatibilities.
- It required having two different methods returning the “next error” in the wrapping chain:
one to report the next for error inspection and one to report the next for printing.
It was difficult to remember which was which.
- Error implementations needed too many methods.
- Most such approaches were incompatible with localization.
We also considered hiding the `Formatter` interface in the `fmt.State` implementation.
This was clumsy to implement and it shared the drawback of requiring error implementation authors
to understand how to implement all the relevant formatting verbs.
## Migration
Packages that currently do their own formatting will have to be rewritten
to use the new interfaces to maximize their utility.
In experimental conversions of
[`github.com/pkg/errors`](https://godoc.org/github.com/pkg/errors),
[`gopkg.in/errgo.v2`](https://godoc.org/gopkg.in/errgo.v2),
and
[`upspin.io/errors`](https://upspin.io/errors),
we found that implementing `Formatter` simplified printing logic considerably,
with the simultaneous benefit of making chains of these errors
interoperable.
This design’s detailed, multiline form is always an expansion of the single-line form,
proceeding through in the same order, outermost to innermost.
Other packages, like [`github.com/pkg/errors`](https://godoc.org/github.com/pkg/errors),
conventionally print detailed errors in the opposite order, contradicting the single-line form.
Users used to reading those errors will need to learn to read the new format.
## Disadvantages
The approach presented here does not provide any standard to programmatically
extract the information that is to be displayed in the messages.
It seems, though, that there is no need for this.
The goal of this approach is interoperability and standardization, not providing structured access.
As noted in the previous section, existing error packages that
print detail will need to update their formatting implementations,
and some will find that the reporting order of errors has changed.
This approach does not specify a standard for printing trees.
Providing a standard error list type could help with this.
# Proposal: Rules for passing pointers between Go and C
Author: Ian Lance Taylor
Last updated: October, 2015
Discussion at https://golang.org/issue/12416.
## Abstract
List specific rules for when it is safe to pass pointers between Go
and C using cgo.
## Background
Go programmers need to know the rules for how to use cgo safely to
share memory between Go and C.
When using cgo, there is memory allocated by Go and memory allocated
by C.
For this discussion, we define a *Go pointer* to be a pointer to Go
memory, and a *C pointer* to be a pointer to C memory.
The rules that need to be defined are when and how Go code can use C
pointers and C code can use Go pointers.
Note that for this discussion a Go pointer may be any pointer type,
including a pointer to a type defined in C.
Note that some Go values contain Go pointers implicitly, such as
strings, slices, maps, channels, and function values.
It is a generally accepted (but not actually documented) rule that Go
code can use C pointers, and they will work as well or as poorly as C
code holding C pointers.
So the only question is this: when can C code use Go pointers?
The de-facto rule for 1.4 is: you can pass any Go pointer to C.
C code may use it freely.
If C code stores the Go pointer in C memory then there must be a live
copy of the pointer in Go as well.
You can allocate Go memory in C code by calling the Go function
`_cgo_allocate`.
The de-facto rule for 1.5 adds restrictions.
You can still pass any Go pointer to C.
However, C code may not store a Go pointer in Go memory (C code can
still store a Go pointer in C memory, with the same restrictions as
in 1.4).
The `_cgo_allocate` function has been removed.
We do not want to document the 1.5 de-facto restrictions as the
permanent rules because they are somewhat confusing, they limit future
garbage collection choices, and in particular they prohibit any future
development of a moving garbage collector.
## Proposal
I propose that we permit Go code to pass Go pointers to C code, while
preserving the following invariant:
* The Go garbage collector must be aware of the location of all Go
pointers, except for a known set of pointers that are temporarily
*visible to C code*.
The pointers visible to C code exist in an area that the garbage
collector can not see, and the garbage collector may not modify or
release them.
It is impossible to break this invariant in Go code that does not
import "unsafe" and does not call C.
I propose the following rules for passing pointers between Go and C,
while preserving this invariant:
1. Go code may pass a Go pointer to C provided that the Go memory to
which it points does not contain any Go pointers.
* The C code must not store any Go pointers in Go memory, even
temporarily.
* When passing a pointer to a field in a struct, the Go memory in
question is the memory occupied by the field, not the entire
struct.
* When passing a pointer to an element in an array or slice, the Go
memory in question is the entire array or the entire backing array
of the slice.
* Passing a Go pointer to C code means that that Go pointer is
visible to C code; passing one Go pointer does not cause any
other Go pointers to become visible.
* The maximum number of Go pointers that can become visible to C
code in a single function call is the number of arguments to the
function.
2. C code may not keep a copy of a Go pointer after the call returns.
* A Go pointer passed as an argument to C code is only visible to C
code for the duration of the function call.
3. A Go function called by C code may not return a Go pointer.
* A Go function called by C code may take C pointers as arguments,
and it may store non-pointer or C pointer data through those
pointers, but it may not store a Go pointer in memory pointed to
by a C pointer.
* A Go function called by C code may take a Go pointer as an
argument, but it must preserve the property that the Go memory to
which it points does not contain any Go pointers.
* C code calling a Go function can not cause any additional Go
pointers to become visible to C code.
4. Go code may not store a Go pointer in C memory.
* C code may store a Go pointer in C memory subject to rule 2: it
must stop storing the pointer before it returns to Go.
The purpose of these four rules is to preserve the above invariant and
to limit the number of Go pointers visible to C code at any one time.
### Examples
Go code can pass the address of an element of a byte slice to C, and C
code can use pointer arithmetic to access all the data in the slice,
and change it (the C code is of course responsible for doing its own
bounds checking).
Go code can pass a Go string to C. With the current Go compilers it
will look like a two element struct.
Go code can pass the address of a struct to C, and C code can use the
data or change it.
Go code can pass the address of a struct that has pointer fields, but
those pointers must be nil or must be C pointers.
Go code can pass a non-nested Go func value into C, and the C code may
call a Go function passing the func value as an argument, but it must
not save the func value in C memory between calls, and it must not
call the func value directly.
A Go function called by C code may not return a string.
### Consequences
This proposal restricts the Go garbage collector: any Go pointer
passed to C code must be pinned for the duration of the C call.
By definition, since that memory block may not contain any Go
pointers, this will only pin a single block of memory.
Because C code can call back into Go code, and that Go code may need
to copy the stack, we can never pass a Go stack pointer into C code.
Any pointer passed into C code must be treated by the compiler as
escaping, even though the above rules mean that we know it will not
escape.
This is an additional cost to the already high cost of calling C code.
Although these rules are written in terms of cgo, they also apply to
SWIG, which uses cgo internally.
Similar rules may apply to the syscall package. Individual functions
in the syscall package will have to declare what Go pointers are
permitted. This particularly applies to Windows.
That completes the rules for sharing memory and the implementation
restrictions on Go code.
### Support
We turn now to helping programmers use these rules correctly.
There is little we can do on the C side.
Programmers will have to learn that C code may not store Go pointers
in Go memory, and may not keep copies of Go pointers after the
function returns.
We can help programmers on the Go side, by implementing restrictions
within the cgo program.
Let us assume that the C code and any unsafe Go code behave perfectly.
We want to have a way to test that the Go code never breaks the
invariant.
We propose an expensive dynamic check that may be enabled upon
request, similar to the race detector.
The dynamic checker will be turned on via a new option to go build:
`-checkcgo`.
The dynamic checker will have the following effects:
* We will turn on the write barrier at all times.
Whenever a pointer is written to memory, we will check whether the
pointer is a Go pointer.
If it is, we will check whether we are writing it to Go
memory (including the heap, the stack, global variables).
If we are not, we will report an error.
* We will change cgo to add code to check any pointer value passed to
a C function.
If the value points to memory containing a Go pointer, we will
report an error.
* We will change cgo to add the same check to any pointer value passed
to an exported Go function, except that the check will be done on
function return rather than function entry.
* We will change cgo to check that any pointer returned by an exported
Go function is not a Go pointer.
These rules taken together preserve the invariant.
It will be impossible to write a Go pointer to non-Go memory.
When passing a Go pointer to C, only that Go pointer will be made
visible to C.
The cgo check ensures that no other pointers are exposed.
Although the memory that the Go pointer points to may contain pointers
to C memory, the write barrier ensures that that C memory can not
contain any Go pointers.
When C code calls a Go function, no additional Go pointers will become
visible to C.
We propose that we enable the above changes, other than the write
barrier, at all times.
These checks are reasonably cheap.
These checks should detect all violations of the invariant on the Go side.
It is still possible to violate the invariant on the C side.
There is little we can do about this (in the long run we could imagine
writing a Go-specific memory sanitizer to catch such errors).
A particular unsafe area is C code that wants to hold on to Go func
and pointer values for future callbacks from C to Go.
This works today but is not permitted by the invariant.
It is hard to detect.
One safe approach is: Go code that wants to preserve funcs/pointers
stores them into a map indexed by an int.
Go code calls the C code, passing the int, which the C code may store
freely.
When the C code wants to call into Go, it passes the int to a Go
function that looks in the map and makes the call.
An explicit call is required to release the value from the map if it
is no longer needed, but that was already true before.
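The map-indexed approach can be sketched in pure Go as below. In modern Go, `runtime/cgo.Handle` (added in Go 1.17) provides this mechanism directly; the registry API shown here is an illustrative assumption:

```go
package main

import (
	"fmt"
	"sync"
)

// A registry mapping integer tokens to Go values, so that C code can
// hold a plain int token instead of a Go pointer.
var (
	mu      sync.Mutex
	handles = map[int]interface{}{}
	next    = 1
)

// NewHandle stores v and returns a token that is safe to pass to C.
func NewHandle(v interface{}) int {
	mu.Lock()
	defer mu.Unlock()
	h := next
	next++
	handles[h] = v
	return h
}

// Value looks up the Go value for a token, typically from a Go
// function invoked as a callback from C.
func Value(h int) interface{} {
	mu.Lock()
	defer mu.Unlock()
	return handles[h]
}

// Delete releases the value once C code no longer needs the token.
func Delete(h int) {
	mu.Lock()
	defer mu.Unlock()
	delete(handles, h)
}

func main() {
	h := NewHandle("callback state")
	fmt.Println(Value(h)) // the C side would pass h back to a Go function
	Delete(h)
}
```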
## Rationale
The garbage collector has more flexibility when it has complete
control over all Go pointers.
We want to preserve that flexibility as much as possible.
One simple rule would be to always prohibit passing Go pointers to C.
Unfortunately that breaks existing packages, like github.com/gonum/blas,
that pass slices of floats from Go to C for efficiency.
It also breaks the standard library, which passes the address of a
`C.struct_addrinfo` to `C.getaddrinfo`.
It would be possible to require all such code to change to allocate
their memory in C rather than Go, but it would make cgo considerably
harder to use.
This proposal is an attempt at the next simplest rule.
We permit passing Go pointers to C, but we limit their number, and
require that the garbage collector be aware of exactly which pointers
have been passed.
If a later garbage collector implements moving pointers, cgo will
introduce temporary pins for the duration of the C call.
Rules are necessary, but it's always useful to enforce the rules.
We can not enforce the rules in C code, but we can attempt to do so in
Go code.
If we adopt these rules, we can not change them later, except to
loosen them.
We can, however, change the enforcement mechanism, if we think of
better approaches.
## Compatibility
These rules are intended to extend the Go 1 compatibility guidelines to
the cgo interface.
## Implementation
The implementation of the rules requires adding documentation to the
cgo command.
The implementation of the enforcement mechanism requires changes to
the cgo tool and the go tool.
The goal is to get agreement on this proposal and to complete the work
before the 1.6 freeze date.
## Open issues
Can and should we provide library support for certain operations, like
passing a token for a Go value through C to Go functions called from
C?
Should there be a way for C code to allocate Go memory, where of
course the Go memory may not contain any Go pointers?
# File System Interfaces for Go — Draft Design
Russ Cox\
Rob Pike\
July 2020
This is a **Draft Design**, not a formal Go proposal,
because it describes a potential
[large change](https://research.swtch.com/proposals-large#checklist),
with integration changes needed in multiple packages in the standard library
as well as potentially in third-party packages.
The goal of circulating this draft design is to collect feedback
to shape an intended eventual proposal.
We are using this change to experiment with new ways to
[scale discussions](https://research.swtch.com/proposals-discuss)
about large changes.
For this change, we will use
[a Go Reddit thread](https://golang.org/s/draft-iofs-reddit)
to manage Q&A, since Reddit's threading support
can easily match questions with answers
and keep separate lines of discussion separate.
There is a [video presentation](https://golang.org/s/draft-iofs-video) of this draft design.
The [prototype code](https://golang.org/s/draft-iofs-code) is available for trying out.
See also the related [embedded files draft design](https://golang.org/s/draft-embed-design), which builds on this design.
## Abstract
We present a possible design for a new Go standard library package `io/fs`
that defines an interface for read-only file trees.
We also present changes to integrate the new package into the standard library.
This package is motivated in part by wanting to add support for
embedded files to the `go` command.
See the [draft design for embedded files](https://golang.org/s/draft-embed-design).
## Background
A hierarchical tree of named files serves as a convenient, useful abstraction
for a wide variety of resources, as demonstrated by Unix, Plan 9, and the HTTP REST idiom.
Even when limited to abstracting disk blocks, file trees come in many forms:
local operating-system files, files stored on other computers,
files in memory, files in other files like ZIP archives.
Go benefits from good abstractions for the data in a single file, such as the
`io.Reader`, `io.Writer`, and related interfaces.
These have been widely implemented and used in the Go ecosystem.
A particular `Reader` or `Writer` might be an operating system file,
a network connection, an in-memory buffer,
a file in a ZIP archive, an HTTP response body,
a file stored on a cloud server, or many other things.
The common, agreed-upon interfaces enable the
creation of useful, general operations like
compression, encryption, hashing, merging, splitting,
and duplication that apply to all these different resources.
Go would also benefit from a good abstraction for a file system tree.
Common, agreed-upon interfaces would help connect the many different
resources that might be presented as file systems
with the many useful generic operations that could be
implemented atop the abstraction.
We started exploring the idea of a file system abstraction years ago,
with an [internal abstraction used in godoc](https://golang.org/cl/4572065).
That code was later extracted as
[golang.org/x/tools/godoc/vfs](https://pkg.go.dev/golang.org/x/tools/godoc/vfs?tab=doc)
and inspired a handful of similar packages.
That interface and its successors seemed too complex to be the
right common abstraction, but they helped us learn more about
what a design might look like.
In the intervening years we've also learned more about
how to use interfaces to model more complex resources.
There have been past discussions about file system interfaces
on [issue 5636](https://golang.org/issue/5636) and [issue 14106](https://golang.org/issue/14106).
This draft design presents a possible official abstraction for a file system tree.
## Design
The core of this design is a new package `io/fs` defining a file system abstraction.
Although the initial interface is limited to read-only file systems,
the design can be extended to support write operations later,
even from third-party packages.
This design also contemplates minor adjustments to the
`archive/zip`,
`html/template`,
`net/http`,
`os`,
and
`text/template`
packages to better implement or consume the file system abstractions.
### The FS interface
The new package `io/fs` defines an `FS` type representing a file system:
    type FS interface {
        Open(name string) (File, error)
    }
The `FS` interface defines the _minimum_ requirement for an implementation:
just an `Open` method.
As we will see, an `FS` implementation may also provide other
methods to optimize operations or add new functionality,
but only `Open` is required.
(Because the package name is `fs`, we need to establish a different
typical variable name for a generic file system.
The prototype code uses `fsys`, as do the examples in this draft design.
The need for such a generic name only arises in code manipulating arbitrary file systems;
most client code will use a meaningful name based on what the file system
contains, such as `styles` for a file system containing CSS files.)
### File name syntax
All `FS` implementations use the same name syntax:
paths are unrooted, slash-separated sequences of path elements,
like Unix paths without the leading slash,
or like URLs without the leading `http://host/`.
Also like in URLs, the separator is a forward slash on all systems, even Windows.
These names can be manipulated using the `path` package.
`FS` path names never contain a ‘`.`’ or ‘`..`’ element except for the
special case that the root directory of a given `FS` file tree is named ‘`.`’.
Paths may be case-sensitive or not, depending on the implementation, so
clients should typically not depend on one behavior or the other.
The use of unrooted names—`x/y/z.jpg` instead of `/x/y/z.jpg`—is
meant to make clear that the name is only meaningful when
interpreted relative to a particular file system root, which is not specified
in the name.
Put another way, the lack of a leading slash makes clear these are
not host file system paths, nor identifiers in some other global name space.
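These naming rules can be checked mechanically. A simplified sketch of such a validity check (the released Go 1.16 package exposes this, with additional UTF-8 checking, as `fs.ValidPath`):

```go
package main

import (
	"fmt"
	"strings"
)

// validName reports whether name follows the io/fs naming rules:
// unrooted, slash-separated, with no empty, ".", or ".." elements,
// plus the special case that "." names the root directory.
func validName(name string) bool {
	if name == "." {
		return true
	}
	if name == "" {
		return false
	}
	for _, elem := range strings.Split(name, "/") {
		if elem == "" || elem == "." || elem == ".." {
			return false
		}
	}
	return true
}

func main() {
	for _, name := range []string{"x/y/z.jpg", "/x/y/z.jpg", "..", "."} {
		fmt.Println(name, validName(name))
	}
}
```

Note that a leading or trailing slash produces an empty element, so rooted paths are rejected without a separate check.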
### The File interface
The `io/fs` package also defines a `File` interface representing an open file:
    type File interface {
        Stat() (os.FileInfo, error)
        Read([]byte) (int, error)
        Close() error
    }
The `File` interface defines the _minimum_ requirements for an implementation.
For `File`, those requirements are
`Stat`, `Read`, and `Close`, with the same meanings as for an `*os.File`.
A `File` implementation may also provide other methods to optimize operations
or add new functionality—for example, an `*os.File` is a valid `File` implementation—but
only these three are required.
If a `File` represents a directory, then just like an `*os.File`,
the `FileInfo` returned by `Stat` will return `true` from `IsDir()` (and from `Mode().IsDir()`).
In this case, the `File` must also implement the `ReadDirFile` interface,
which adds a `ReadDir` method.
The `ReadDir` method has the same semantics as the `*os.File` `Readdir` method.
(Later, this design adds a `ReadDir` method with a capital D to `*os.File`.)
    // A ReadDirFile is a File that implements the ReadDir method for directory reading.
    type ReadDirFile interface {
        File
        ReadDir(n int) ([]os.FileInfo, error)
    }
### Extension interfaces and the extension pattern
This `ReadDirFile` interface is an example of an old Go pattern
that we’ve never named before but that we suggest calling
an _extension interface_.
An extension interface embeds a base interface and adds one or more extra methods,
as a way of specifying optional functionality that may be
provided by an instance of the base interface.
An extension interface is named by prefixing the base interface name
with the new method: a `File` with `ReadDir` is a `ReadDirFile`.
Note that this convention can be viewed as a generalization of existing names
like `io.ReadWriter` and `io.ReadWriteCloser`.
That is, an `io.ReadWriter` is an `io.Writer` that also has a `Read` method,
just like a `ReadDirFile` is a `File` that also has a `ReadDir` method.
The `io/fs` package does not define extensions like `ReadAtFile`, `ReadSeekFile`, and so on,
to avoid duplication with the `io` package.
Clients are expected to use the `io` interfaces directly for such operations.
An extension interface can provide access to new functionality not available in a base interface,
or an extension interface can also provide access to a more efficient implementation
of functionality already available, using additional method calls, using the base interface.
Either way, it can be helpful to pair an extension interface with a helper function
that uses the optimized implementation if available and
falls back to what is possible in the base interface otherwise.
An early example of this _extension pattern_—an extension interface paired with a helper
function—is the `io.StringWriter` interface and the `io.WriteString` helper function,
which have been present since Go 1:
    package io

    // StringWriter is the interface that wraps the WriteString method.
    type StringWriter interface {
        WriteString(s string) (n int, err error)
    }

    // WriteString writes the contents of the string s to w, which accepts a slice of bytes.
    // If w implements StringWriter, its WriteString method is invoked directly.
    // Otherwise, w.Write is called exactly once.
    func WriteString(w Writer, s string) (n int, err error) {
        if sw, ok := w.(StringWriter); ok {
            return sw.WriteString(s)
        }
        return w.Write([]byte(s))
    }
This example deviates from the discussion above in that `StringWriter` is not quite an extension interface:
it does not embed `io.Writer`.
For a single-method interface where the extension method replaces
the original one, not repeating the original method can make sense, as here.
But in general we do embed the original interface, so that code that
tests for the new interface can access the original and new methods using
a single variable.
(In this case, `StringWriter` not embedding `io.Writer` means that `WriteString` cannot call `sw.Write`.
That's fine in this case, but consider instead if `io.ReadSeeker` did not exist:
code would have to test for `io.Seeker` and use separate variables for the `Read` and `Seek` operations.)
### Extensions to FS
`File` had just one extension interface,
in part to avoid duplication with the existing interfaces in `io`.
But `FS` has a handful.
#### ReadFile
One common operation is reading an entire file,
as `ioutil.ReadFile` does for operating system files.
The `io/fs` package provides this functionality using the extension pattern,
defining a `ReadFile` helper function supported by
an optional `ReadFileFS` interface:
    func ReadFile(fsys FS, name string) ([]byte, error)
The general implementation of `ReadFile` can call `fsys.Open` to obtain a `file` of type `File`,
followed by calls to `file.Read` and a final call to `file.Close`.
But if an `FS` implementation can provide file contents
more efficiently in a single call, it can implement the
`ReadFileFS` interface:
    type ReadFileFS interface {
        FS
        ReadFile(name string) ([]byte, error)
    }
The top-level `func ReadFile` first checks to see if its argument `fsys` implements `ReadFileFS`.
If so, `func ReadFile` calls `fsys.ReadFile`.
Otherwise it falls back to the `Open`, `Read`, `Close` sequence.
For concreteness, here is a complete implementation of `func ReadFile`:
    func ReadFile(fsys FS, name string) ([]byte, error) {
        if fsys, ok := fsys.(ReadFileFS); ok {
            return fsys.ReadFile(name)
        }
        file, err := fsys.Open(name)
        if err != nil {
            return nil, err
        }
        defer file.Close()
        return io.ReadAll(file)
    }
(This assumes `io.ReadAll` exists; see [issue 40025](https://golang.org/issue/40025).)
#### Stat
We can use the extension pattern again for `Stat` (analogous to `os.Stat`):
    type StatFS interface {
        FS
        Stat(name string) (os.FileInfo, error)
    }

    func Stat(fsys FS, name string) (os.FileInfo, error) {
        if fsys, ok := fsys.(StatFS); ok {
            return fsys.Stat(name)
        }
        file, err := fsys.Open(name)
        if err != nil {
            return nil, err
        }
        defer file.Close()
        return file.Stat()
    }
#### ReadDir
And we can use the extension pattern again for `ReadDir` (analogous to `ioutil.ReadDir`):
    type ReadDirFS interface {
        FS
        ReadDir(name string) ([]os.FileInfo, error)
    }

    func ReadDir(fsys FS, name string) ([]os.FileInfo, error)
The implementation follows the pattern,
but the fallback case is slightly more complex:
it must handle the case where the named file
does not implement `ReadDirFile` by creating an appropriate error to return.
#### Walk
The `io/fs` package provides a top-level `func Walk` (analogous to `filepath.Walk`)
built using `func ReadDir`,
but there is _not_ an analogous extension interface.
The semantics of `Walk` are such that the only significant
optimization would be to have access to a fast `ReadDir` function.
An `FS` implementation can provide that by implementing `ReadDirFS`.
The semantics of `Walk` are also quite subtle: it is better
to have a single correct implementation than buggy custom ones,
especially if a custom one cannot provide any significant
optimization.
This can still be seen as a kind of extension pattern,
but without the one-to-one match:
instead of `Walk` using `WalkFS`, we have `Walk` reusing `ReadDirFS`.
#### Glob
Another convenience function is `Glob`, analogous to `filepath.Glob`:
    type GlobFS interface {
        FS
        Glob(pattern string) ([]string, error)
    }

    func Glob(fsys FS, pattern string) ([]string, error)
The fallback case here is not a trivial single call
but instead most of a copy of `filepath.Glob`: it must
decide which directories to read, read them, and look
for matches.
Although `Glob` is like `Walk` in that its implementation
is a non-trivial amount of somewhat subtle code,
`Glob` differs from `Walk` in that a custom implementation
can deliver a significant speedup.
For example, suppose the pattern is `*/gopher.jpg`.
The general implementation has to call `ReadDir(".")`
and then `Stat(dir+"/gopher.jpg")` for every directory
in the list returned by `ReadDir`.
If the `FS` is being accessed over a network and `*`
matches many directories, this sequence requires
many round trips.
In this case, the `FS` could implement a `Glob` method
that answered the call in a single round trip,
sending only the pattern and receiving only the matches,
avoiding all the directories that don't contain `gopher.jpg`.
### Possible future or third-party extensions
This design is limited to the above operations,
which provide basic, convenient, read-only access to a file system.
However, the extension pattern can be applied to add
any new operations we might want in the future.
Even third-party packages can use it; not every
possible file system operation needs to be contemplated in `io/fs`.
For example, the `FS` in this design provides no support
for renaming files.
But it could be added easily, using code like:
    type RenameFS interface {
        FS
        Rename(oldpath, newpath string) error
    }

    func Rename(fsys FS, oldpath, newpath string) error {
        if fsys, ok := fsys.(RenameFS); ok {
            return fsys.Rename(oldpath, newpath)
        }
        return fmt.Errorf("rename %s %s: operation not supported", oldpath, newpath)
    }
Note that this code does nothing
that requires being in the `io/fs` package.
A third-party package can define its own `FS` helpers
and extension interfaces.
The `FS` in this design also provides no way to
open a file for writing.
Again, this could be done with the extension pattern,
even from a different package.
If done from a different package, the code might look like:
    type OpenFileFS interface {
        fs.FS
        OpenFile(name string, flag int, perm os.FileMode) (fs.File, error)
    }

    func OpenFile(fsys fs.FS, name string, flag int, perm os.FileMode) (fs.File, error) {
        if fsys, ok := fsys.(OpenFileFS); ok {
            return fsys.OpenFile(name, flag, perm)
        }
        if flag == os.O_RDONLY {
            return fsys.Open(name)
        }
        return nil, fmt.Errorf("open %s: operation not supported", name)
    }
Note that even if this pattern were implemented in multiple
other packages, they would still all interoperate
(provided the method signatures matched,
which is likely, since package `os` has already defined
the canonical names and signatures).
The interoperation results from the implementations
all agreeing on the shared file system type and file type:
`fs.FS` and `fs.File`.
The extension pattern can be applied to any missing operation:
`Chmod`, `Chtimes`, `Mkdir`, `MkdirAll`, `Sync`, and so on.
Instead of putting them all in `io/fs`,
the design starts small, with read-only operations.
### Adjustments to os
As presented above, the `io/fs` package needs to import `os`
for the `os.FileInfo` interface and the `os.FileMode` type.
These types do not really belong in `os`,
but we had no better home for them when they were introduced.
Now, `io/fs` is a better home,
and they should move there.
This design moves `os.FileInfo` and `os.FileMode` into `io/fs`,
redefining the names in `os` as aliases for the definitions in `io/fs`.
The `FileMode` constants, such as `ModeDir`, would move as well,
redefining the names in `os` as constants copying the `io/fs` values.
No user code will need updating, but the move will make it possible
to implement an `fs.FS` by importing only `io/fs`, not `os`.
This is analogous to `io` not depending on `os`.
(For more about why `io` should not depend on `os`, see
“[Codebase Refactoring (with help from Go)](https://talks.golang.org/2016/refactor.article)”,
especially section 3.)
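
This particular move shipped in Go 1.16, so the aliasing is observable today: a mode value flows between the `os` and `io/fs` names without any conversion. A sketch:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
)

// modeRoundTrip passes a mode value between the os and io/fs names.
// It compiles only because os.FileMode is a type alias for
// fs.FileMode, exactly as the design describes.
func modeRoundTrip() bool {
	var m fs.FileMode = os.ModeDir | 0o755 // os constant, fs type
	var o os.FileMode = m                  // and back, no conversion
	return o.IsDir()
}

func main() {
	fmt.Println(modeRoundTrip())
}
```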
For the same reason, the type `os.PathError` should move to `io/fs`,
with a forwarding type alias left behind.
The general file system errors `ErrInvalid`, `ErrPermission`,
`ErrExist`, `ErrNotExist`, and `ErrClosed` should also move to `io/fs`.
In this case, those are variables, not types, so no aliases are needed.
The definitions left behind in package `os` would be:

    package os

    import "io/fs"

    var (
        ErrInvalid    = fs.ErrInvalid
        ErrPermission = fs.ErrPermission
        ...
    )

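
These variables also moved in Go 1.16, with the `os` names bound to the same values, so `errors.Is` matches either spelling:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// notExist opens a file that should not exist and reports whether
// the error matches the fs and os sentinels; because os.ErrNotExist
// is defined as fs.ErrNotExist, both checks agree.
func notExist() (bool, bool) {
	_, err := os.Open("this-file-should-not-exist-404")
	return errors.Is(err, fs.ErrNotExist), errors.Is(err, os.ErrNotExist)
}

func main() {
	a, b := notExist()
	fmt.Println(a, b)
}
```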
To match `fs.ReadDirFile` and fix casing, the design adds new `os.File` methods
`ReadDir` and `ReadDirNames`, equivalent to the existing `Readdir` and `Readdirnames`.
The old casings should have been corrected long ago;
correcting them now in `os.File` is better than requiring all
implementations of `fs.File` to use the wrong names.
(Adding `ReadDirNames` is not strictly necessary, but we might
as well fix them both at the same time.)
Finally, as code starts to be written that expects an `fs.FS` interface,
it will be natural to want an `fs.FS` backed by an operating system directory.
This design adds a new function `os.DirFS`:

    package os

    // DirFS returns an fs.FS implementation that
    // presents the files in the subtree rooted at dir.
    func DirFS(dir string) fs.FS

Note that this function can only be written once the `FileInfo`
type moves into `io/fs`, so that `os` can import `io/fs`
instead of the other way around.
### Adjustments to html/template and text/template
The `html/template` and `text/template` packages each provide
a pair of methods reading from the operating system's file system:

    func (t *Template) ParseFiles(filenames ...string) (*Template, error)
    func (t *Template) ParseGlob(pattern string) (*Template, error)

The design adds one new method:

    func (t *Template) ParseFS(fsys fs.FS, patterns ...string) (*Template, error)

Nearly all file names are glob patterns matching only themselves,
so a single call should suffice instead of having to introduce both `ParseFilesFS` and `ParseGlobFS`.
Matching these methods, top-level `ParseFS` functions are also added to each package, alongside the existing top-level `ParseFiles` and `ParseGlob`.
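
As shipped in Go 1.16, the `ParseFS` method accepts any `fs.FS`; a sketch using the in-memory `testing/fstest.MapFS`:

```go
package main

import (
	"bytes"
	"fmt"
	"testing/fstest"
	"text/template"
)

// render parses every *.tmpl file in an in-memory file system and
// executes one of them into a buffer.
func render() (string, error) {
	fsys := fstest.MapFS{
		"greet.tmpl": &fstest.MapFile{Data: []byte("Hello, {{.}}!")},
	}
	t, err := template.ParseFS(fsys, "*.tmpl")
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	// Templates are named by base name, as with ParseFiles.
	if err := t.ExecuteTemplate(&buf, "greet.tmpl", "gopher"); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	s, err := render()
	fmt.Println(s, err)
}
```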
### Adjustments to net/http
The `net/http` package defines its own `FileSystem` and `File` types,
used by `http.FileServer`:

    type FileSystem interface {
        Open(name string) (File, error)
    }

    type File interface {
        io.Closer
        io.Reader
        io.Seeker
        Readdir(count int) ([]os.FileInfo, error)
        Stat() (os.FileInfo, error)
    }

    func FileServer(root FileSystem) Handler

If `io/fs` had come before `net/http`, this code could use `io/fs` directly,
removing the need to define those interfaces.
Since they already exist,
they must be left for compatibility.
The design adds an equivalent to `FileServer` but for an `fs.FS`:

    func HandlerFS(fsys fs.FS) Handler

The `HandlerFS` requires of its file system that the opened files support `Seek`.
This is an additional requirement made by HTTP, to support range requests.
Not all file systems need to implement `Seek`.
### Adjustments to archive/zip
Any Go type that represents a tree of files should implement `fs.FS`.
The current `zip.Reader` has no `Open` method,
so this design adds one, with the signature needed
to implement `fs.FS`.
Note that the opened files are streams of bytes decompressed on the fly.
They can be read, but they do not support seeking.
This means a `zip.Reader` now implements `fs.FS` and therefore
can be used as a source of templates passed to `html/template`.
While the same `zip.Reader` can also be passed to
`net/http` using `http.HandlerFS`—that is, such a program would type-check—the
HTTP server would not be able to serve range requests on those files,
for lack of a `Seek` method.
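
As shipped in Go 1.16, `*zip.Reader` gained exactly this `Open` method. A sketch that builds an archive in memory and reads a file back through the `io/fs` helpers:

```go
package main

import (
	"archive/zip"
	"bytes"
	"fmt"
	"io/fs"
)

// zipRead builds a small archive in memory and reads a file back
// through the fs.FS interface that *zip.Reader implements.
func zipRead() (string, error) {
	var buf bytes.Buffer
	zw := zip.NewWriter(&buf)
	w, err := zw.Create("docs/readme.txt")
	if err != nil {
		return "", err
	}
	if _, err := w.Write([]byte("zipped!")); err != nil {
		return "", err
	}
	if err := zw.Close(); err != nil {
		return "", err
	}

	zr, err := zip.NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()))
	if err != nil {
		return "", err
	}
	data, err := fs.ReadFile(zr, "docs/readme.txt") // decompressed on the fly
	return string(data), err
}

func main() {
	s, err := zipRead()
	fmt.Println(s, err)
}
```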
On the other hand, for a small set of files, it might make sense to define
file system middleware that cached copies of the underlying files in memory,
providing seekability and perhaps increased performance, in exchange for
higher memory usage. Such middleware—some kind of `CachingFS`—could be provided
in a third-party package and then used to connect the `zip.Reader` to an `http.HandlerFS`.
Indeed, enabling that kind of middleware is a key goal for this draft design.
Another example might be transparent decryption of the underlying files.
### Adjustments to archive/tar (none)
The design does not include changes to `archive/tar`,
because that format cannot easily support random access:
the first call to `Open` would have to read the entire
archive to find all its files, caching the list for future calls.
And that's only even possible if the underlying `io.Reader`
supports `Seek` or `ReadAt`.
That's a lot of work for an implementation that would be fairly inefficient;
adding it to the standard library would be setting a performance trap.
If needed, the functionality could be provided by a third-party package instead.
## Rationale
### Why now?
The rationale for the specific design decisions is given along with those decisions above.
But there have been discussions about a file system interface for many years, with no progress. Why now?
Two things have changed since those early discussions.
First, we have a direct need for the functionality in the standard library,
and necessity remains the mother of invention.
The [embedded files draft design](https://golang.org/s/draft-embed-design)
aims to add direct support for embedded files to the `go` command,
which raises the question of how to integrate them with the rest of the
standard library.
For example, a common use for embedded files is to parse them as templates
or serve them directly over HTTP.
Without this design, we'd need to define specific methods in those packages
for accepting embedded files.
Defining a file system interface lets us instead add general new methods that will
apply not just to embedded files but also ZIP files and any other kind of resource
presented as an `FS` implementation.
Second, we have more experience with how to use optional interfaces well.
Previous attempts at file system interfaces floundered in the complexity of
defining a complete set of operations.
The results were unwieldy to implement.
This design reduces the necessary implementation to an absolute minimum,
with the extension pattern allowing the provision of new functionality,
even by third-party packages.
### Why not http.FileServer?
The `http.FileServer` and `http.File` interfaces are clearly one of the inspirations
for the new `fs.FS` and `fs.File`, and they have been used beyond HTTP.
But they are not quite right:
every `File` need not be required to implement `Seek` and `Readdir`.
As noted earlier, `text/template` and `html/template` are perfectly happy
reading from a collection of non-seekable files (for example, a ZIP archive).
It doesn't make sense to impose HTTP's requirements on all file systems.
If we are to encourage use of a general interface well beyond HTTP,
it is worth getting right; the cost is only minimal adaptation of
existing `http.FileServer` implementations.
It should also be easy to write general adapters in both directions.
### Why not in golang.org/x?
New API sometimes starts in `golang.org/x`; for example, `context` was originally `golang.org/x/net/context`.
That's not an option here, because one of the key parts of the design
is to define good integrations with the standard library,
and those APIs can't expose references to `golang.org/x`.
(At that point, the APIs might as well be in the standard library.)
## Compatibility
This is all new API.
There are no conflicts with the [compatibility guidelines](https://golang.org/doc/go1compat).
If we'd had `io/fs` before Go 1, some API might have been avoided.
## Implementation
A [prototype implementation](https://golang.org/s/draft-iofs-code) is available.
# Error Handling — Problem Overview
Russ Cox\
August 27, 2018
## Introduction
This overview and the accompanying
[detailed draft design](go2draft-error-handling.md)
are part of a collection of [Go 2 draft design documents](go2draft.md).
The overall goal of the Go 2 effort is to address
the most significant ways that Go fails to scale
to large code bases and large developer efforts.
One way that Go programs fail to scale well is in the
writing of error-checking and error-handling code.
In general Go programs have too much code checking errors
and not enough code handling them.
(This will be illustrated below.)
The draft design aims to address this problem by introducing
lighter-weight syntax for error checks
than the current idiomatic assignment-and-if-statement combination.
As part of Go 2, we are also considering, as a separate concern,
changes to the [semantics of error values](go2draft-error-values-overview.md),
but this document is only about error checking and handling.
## Problem
To scale to large code bases, Go programs must be lightweight,
[without undue repetition](https://www.youtube.com/watch?v=5kj5ApnhPAE),
and also robust,
[dealing gracefully with errors](https://www.youtube.com/watch?v=lsBF58Q-DnY)
when they do arise.
In the design of Go, we made a conscious choice
to use explicit error results and explicit error checks.
In contrast, C most typically uses explicit checking
of an implicit error result, [errno](http://man7.org/linux/man-pages/man3/errno.3.html),
while exception handling—found in many languages,
including C++, C#, Java, and Python—represents implicit checking of implicit results.
The subtleties of implicit checking are covered well in
Raymond Chen’s pair of blog posts,
"[Cleaner, more elegant, and wrong](https://devblogs.microsoft.com/oldnewthing/20040422-00/?p=39683)" (2004),
and "[Cleaner, more elegant, and harder to recognize](https://devblogs.microsoft.com/oldnewthing/20050114-00/?p=36693)" (2005).
In essence, because you can’t see implicit checks at all,
it is very hard to verify by inspection that the error handling code
correctly recovers from the state of the program at the time the check fails.
For example, consider this code, written in a hypothetical dialect of Go with exceptions:

    func CopyFile(src, dst string) throws error {
        r := os.Open(src)
        defer r.Close()

        w := os.Create(dst)
        io.Copy(w, r)
        w.Close()
    }

It is nice, clean, elegant code.
It is also invisibly wrong: if `io.Copy` or `w.Close` fails,
the code does not remove the partially-written `dst` file.
On the other hand, the equivalent actual Go code today would be:

    func CopyFile(src, dst string) error {
        r, err := os.Open(src)
        if err != nil {
            return err
        }
        defer r.Close()

        w, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer w.Close()

        if _, err := io.Copy(w, r); err != nil {
            return err
        }
        if err := w.Close(); err != nil {
            return err
        }
        return nil
    }

This code is not nice, not clean, not elegant, and still wrong:
like the previous version, it does not remove `dst` when `io.Copy` or `w.Close` fails.
There is a plausible argument that at least a visible check
could prompt an attentive reader to wonder about
the appropriate error-handling response at that point in the code.
In practice, however, error checks take up so much space
that readers quickly learn to skip them to see the structure of the code.
This code also has a second omission in its error handling.
Functions should typically [include relevant information](https://golang.org/doc/effective_go.html#errors)
about their arguments in their errors,
like `os.Open` returning the name of the file being opened.
Returning the error unmodified produces a failure
without any information about the sequence of operations that led to the error.
In short, this Go code has too much error checking
and not enough error handling.
A more robust version with more helpful errors would be:

    func CopyFile(src, dst string) error {
        r, err := os.Open(src)
        if err != nil {
            return fmt.Errorf("copy %s %s: %v", src, dst, err)
        }
        defer r.Close()

        w, err := os.Create(dst)
        if err != nil {
            return fmt.Errorf("copy %s %s: %v", src, dst, err)
        }

        if _, err := io.Copy(w, r); err != nil {
            w.Close()
            os.Remove(dst)
            return fmt.Errorf("copy %s %s: %v", src, dst, err)
        }

        if err := w.Close(); err != nil {
            os.Remove(dst)
            return fmt.Errorf("copy %s %s: %v", src, dst, err)
        }
        return nil
    }

Correcting these faults has only made the code more correct, not cleaner or more elegant.
## Goals
For Go 2, we would like to make error checks more lightweight,
reducing the amount of Go program text dedicated to error checking.
We also want to make it more convenient to write error handling,
raising the likelihood that programmers will take the time to do it.
Both error checks and error handling must remain explicit,
meaning visible in the program text.
We do not want to repeat the pitfalls of exception handling.
Existing code must keep working and remain as valid as it is today.
Any changes must interoperate with existing code.
As mentioned above, it is not a goal of this draft design
to change or augment the semantics of errors.
For that discussion see the [error values problem overview](go2draft-error-values-overview.md).
## Draft Design
This section quickly summarizes the draft design,
as a basis for high-level discussion and comparison with other approaches.
The draft design introduces two new syntactic forms.
First, it introduces a checked expression `check f(x, y, z)` or `check err`,
marking an explicit error check.
Second, it introduces a `handle` statement defining an error handler.
When an error check fails, it transfers control to the innermost handler,
which transfers control to the next handler above it,
and so on, until a handler executes a `return` statement.
For example, the corrected code above shortens to:

    func CopyFile(src, dst string) error {
        handle err {
            return fmt.Errorf("copy %s %s: %v", src, dst, err)
        }

        r := check os.Open(src)
        defer r.Close()

        w := check os.Create(dst)
        handle err {
            w.Close()
            os.Remove(dst) // (only if a check fails)
        }

        check io.Copy(w, r)
        check w.Close()
        return nil
    }

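
The chaining behavior can be illustrated in today's Go by stacking handler closures. This is only a sketch of the control flow (a failed check runs handlers from innermost outward), not the proposal's actual implementation; `copyFile` and its `failCopy` flag are hypothetical names invented for the illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// copyFile simulates check/handle: handlers stack as they are
// declared, and a failed check runs them innermost first before
// returning the final error.
func copyFile(src, dst string, failCopy bool) (err error) {
	var handlers []func(error) error
	handle := func(h func(error) error) { handlers = append(handlers, h) }
	check := func(e error) bool {
		if e == nil {
			return false
		}
		for i := len(handlers) - 1; i >= 0; i-- { // innermost first
			e = handlers[i](e)
		}
		err = e
		return true
	}

	handle(func(e error) error { // outer `handle err { ... }`
		return fmt.Errorf("copy %s %s: %v", src, dst, e)
	})
	// ... opening src and creating dst would go here ...
	handle(func(e error) error { // inner handler: cleanup, pass error on
		// w.Close() and os.Remove(dst) would go here
		return e
	})

	var copyErr error
	if failCopy {
		copyErr = errors.New("io.Copy failed") // simulated failure
	}
	if check(copyErr) {
		return
	}
	return nil
}

func main() {
	fmt.Println(copyFile("a.txt", "b.txt", true))
	fmt.Println(copyFile("a.txt", "b.txt", false))
}
```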
The `check`/`handle` combination is permitted in functions
that do not themselves return errors.
For example, here is a main function from a
[useful but trivial program](https://github.com/rsc/tmp/blob/master/unhex/main.go):

    func main() {
        hex, err := ioutil.ReadAll(os.Stdin)
        if err != nil {
            log.Fatal(err)
        }
        data, err := parseHexdump(string(hex))
        if err != nil {
            log.Fatal(err)
        }
        os.Stdout.Write(data)
    }

It would be shorter and clearer to write instead:

    func main() {
        handle err {
            log.Fatal(err)
        }
        hex := check ioutil.ReadAll(os.Stdin)
        data := check parseHexdump(string(hex))
        os.Stdout.Write(data)
    }

For details, see the [draft design](go2draft-error-handling.md).
## Discussion and Open Questions
These draft designs are meant only as a starting point for community discussion.
We fully expect the details to be revised based on feedback and especially experience reports.
This section outlines some of the questions that remain to be answered.
**Check versus try**.
The keyword `check` is a clear statement of what is being done.
Originally we used the well-known exception keyword `try`.
This did read well for function calls:

    data := try parseHexdump(string(hex))

But it did not read well for checks applied to error values:

    data, err := parseHexdump(string(hex))
    if err == ErrBadHex {
        ... special handling ...
    }
    try err

In this case, `check err` is a clearer description than `try err`.
Rust originally used `try!` to mark an explicit error check
but moved to a special `?` operator instead.
Swift also uses `try` to mark an explicit error check,
but also `try!` and `try?`, and as part of a broader
analogy to exception-handling that also includes `throw` and `catch`.
Overall it seems that the draft design’s `check`/`handle`
are sufficiently different from exception handling
and from Rust and Swift to justify the clearer keyword,
`check`, over the more familiar one, `try`.
Both Rust and Swift are discussed in more detail below.
**Defer**.
The error handling is in some ways similar to [`defer`](https://golang.org/ref/spec#Defer_statements) and
[`recover`](https://golang.org/ref/spec#Handling_panics),
but for errors instead of panics.
The current draft design makes error handlers chain lexically,
while `defer` builds up a chain at runtime
depending on what code executes.
This difference matters for handlers (or deferred functions)
declared in conditional bodies and loops.
Although lexical stacking of error handlers seems like a marginally better design,
it may be less surprising to match `defer` exactly.
As an example where `defer`-like handling would be more convenient,
if `CopyFile` established its destination `w` as either `os.Stdout` or the result of `os.Create`,
then it would be helpful to be able to introduce the `os.Remove(dst)` handler conditionally.
**Panics**.
We’ve spent a while trying to harmonize error handling and panics,
so that cleanup due to error handling need not be repeated for cleanup due to panics.
All our attempts at unifying the two only led to more complexity.
**Feedback**.
The most useful general feedback would be examples of interesting uses
that are enabled or disallowed by the draft design.
We’d also welcome feedback about the points above,
especially based on experience with complex
or buggy error handling in real programs.
We are collecting links to feedback at
[golang.org/wiki/Go2ErrorHandlingFeedback](https://golang.org/wiki/Go2ErrorHandlingFeedback).
## Designs in Other Languages
The problem section above briefly discussed C and exception-based languages.
Other recent language designs have also recognized
the problems caused by exception handling’s invisible error checks,
and those designs are worth examining in more detail.
The Go draft design was inspired, at least in part, by each of them.
## Rust
Like Go, [Rust distinguishes](https://doc.rust-lang.org/book/second-edition/ch09-00-error-handling.html)
between expected errors, like "file not found", and unexpected errors,
like accessing past the end of an array.
Expected errors are returned explicitly
while unexpected errors become program-ending panics.
But Rust has little special-purpose
language support for expected errors.
Instead, concise handling of expected errors
is done almost entirely by generics.
In Rust, functions return single values (possibly a single tuple value),
and a function returning a potential error returns a
[discriminated union `Result<T, E>`](https://doc.rust-lang.org/book/second-edition/ch09-02-recoverable-errors-with-result.html)
that is either the successful result of type `T` or an error of type `E`.

    enum Result<T, E> {
        Ok(T),
        Err(E),
    }

For example, `fs::File::open` returns a `Result<fs::File, io::Error>`.
The generic `Result<T, E>` type defines an
[unwrap method](https://doc.rust-lang.org/book/second-edition/ch09-02-recoverable-errors-with-result.html#shortcuts-for-panic-on-error-unwrap-and-expect)
that turns a result into the underlying value (of type `T`)
or else panics (if the result represents an error).
If code does want to check an error instead of panicking,
[the `?` operator](https://doc.rust-lang.org/book/second-edition/ch09-02-recoverable-errors-with-result.html#a-shortcut-for-propagating-errors-the--operator)
macro-expands `use(result?)` into the Rust equivalent of this Go code:

    if result.err != nil {
        return result.err
    }
    use(result.value)

The `?` operator therefore helps shorten the error checking
and is very similar to the draft design’s `check`.
But Rust has no equivalent of `handle`:
the convenience of the `?` operator comes with
the likely omission of proper handling.
Rust’s equivalent of Go’s explicit error check `if err != nil` is
[using a `match` statement](https://doc.rust-lang.org/book/second-edition/ch09-02-recoverable-errors-with-result.html),
which is equally verbose.
Rust’s `?` operator began life as
[the `try!` macro](https://doc.rust-lang.org/beta/book/first-edition/error-handling.html#the-real-try-macro).
## Swift
Swift’s `try`, `catch`, and `throw` keywords appear at first glance to be
implementing exception handling, but really they are syntax for explicit error handling.
Each function’s signature specifies whether the function
can result in ("throw") an error.
Here is an [example from the Swift book](https://docs.swift.org/swift-book/LanguageGuide/ErrorHandling.html#ID510):

    func canThrowErrors() throws -> String
    func cannotThrowErrors() -> String

These are analogous to the Go result lists `(string, error)` and `string`.
Inside a "throws" function, the
`throw` statement returns an error,
as in this [example, again from the Swift book](https://docs.swift.org/swift-book/LanguageGuide/ErrorHandling.html#ID509):

    throw VendingMachineError.insufficientFunds(coinsNeeded: 5)

Every call to a "throws" function must specify at the call site
what to do in case of error. In general that means nesting the call
(perhaps along with other calls) inside a [do-catch block](https://docs.swift.org/swift-book/LanguageGuide/ErrorHandling.html#ID541),
with all potentially-throwing calls marked by the `try` keyword:

    do {
        let s = try canThrowErrors()
        let t = cannotThrowErrors()
        let u = try canThrowErrors() // a second call
    } catch {
        // handle error from try above
    }

The key differences from exception handling as in C++, Java, Python,
and similar languages are:
- Every error check is marked.
- There must be a `catch` or other direction about what to do with an error.
- There is no implicit stack unwinding.
Combined, those differences make all error checking,
handling, and control flow transfers explicit, as in Go.
Swift introduces three shorthands to avoid having
to wrap every throwing function call in a `do`-`catch` block.
First, outside a block, `try canThrowErrors()`
[checks for the error and re-throws it](https://docs.swift.org/swift-book/LanguageGuide/ErrorHandling.html#ID510),
like Rust’s old `try!` macro and current `?` operator.
Second, `try! canThrowErrors()`
[checks for the error and turns it into a runtime assertion failure](https://docs.swift.org/swift-book/LanguageGuide/ErrorHandling.html#ID513),
like Rust’s `.unwrap` method.
Third, `try? canThrowErrors()`
[evaluates to nil on error, or else the function’s result](https://docs.swift.org/swift-book/LanguageGuide/ErrorHandling.html#ID542).
The Swift book gives this example:

    func fetchData() -> Data? {
        if let data = try? fetchDataFromDisk() { return data }
        if let data = try? fetchDataFromServer() { return data }
        return nil
    }

The example discards the exact reasons these functions failed.
For cleanup, Swift adds lexical [`defer` blocks](https://docs.swift.org/swift-book/LanguageGuide/ErrorHandling.html#ID514),
which run when the enclosing scope is exited, whether by an explicit `return` or by throwing an error.
# Cryptography Principles
Author: Filippo Valsorda\
Last updated: June 2019\
Discussion: [golang.org/issue/32466](https://golang.org/issue/32466)
https://golang.org/design/cryptography-principles
The Go cryptography libraries' goal is to *help developers build
secure applications*. Thus, they aim to be **secure**, **safe**,
**practical** and **modern**, in roughly that order.
**Secure**. We aim to provide a secure implementation free of
security vulnerabilities.
> This is achieved through reduced complexity, testing, code
> review, and a focus on readability. We will only accept
> changes when there are enough maintainer resources to ensure
> their (ongoing) security.
**Safe**. The goal is to make the libraries easy—not just
possible—to use securely, as library misuse is just as dangerous
to applications as vulnerabilities.
> The default behavior should be safe in as many scenarios as
> possible, and unsafe functionality, if at all available,
> should require explicit acknowledgement in the API.
> Documentation should provide guidance on how to choose and use
> the libraries.
**Practical**. The libraries should provide most developers with
a way to securely and easily do what they are trying to do,
focusing on common use cases to stay minimal.
> The target is applications, not diagnostic or testing tools.
> It’s expected that niche and uncommon needs will be addressed
> by third-party projects. Widely supported functionality is
> preferred to enable interoperability with non-Go applications.
>
> Note that performance, flexibility and compatibility are only
> goals to the extent that they make the libraries useful, not as
> absolute values in themselves.
**Modern**. The libraries should provide the best available
tools for the job, and they should keep up to date with progress
in cryptography engineering.
> If functionality becomes legacy and superseded, it should be
> marked as deprecated and a modern replacement should be
> provided and documented.
>
> Modern doesn’t mean experimental. As the community grows, it’s
> expected that most functionality will be implemented by
> third-party projects first, and that’s ok.
Note that this is an ordered list, from highest to lowest
priority. For example, an insecure implementation or unsafe API
will not be considered, even if it enables more use cases or is
more performant.
---
The Go cryptography libraries are the `crypto/...` and
`golang.org/x/crypto/...` packages in the Go standard library
and subrepos.
The specific criteria for what is considered a common use case,
widely supported or superseded are complex and out of scope for
this document.
# Go 1.18 Implementation of Generics via Dictionaries and Gcshape Stenciling
This document describes the implementation of generics via dictionaries and
gcshape stenciling in Go 1.18. It provides more concrete and up-to-date
information than described in the [Gcshape design document](https://github.com/golang/proposal/blob/master/design/generics-implementation-gcshape.md).
The compiler implementation of generics (after typechecking) focuses mainly on creating instantiations of generic functions and methods that will execute with arguments that have concrete types. In order to avoid creating a different function instantiation for each invocation of a generic function/method with distinct type arguments (which would be pure stenciling), we pass a **dictionary** along with every call to a generic function/method. The [dictionary](https://go.googlesource.com/proposal/+/refs/heads/master/design/generics-implementation-dictionaries.md) provides relevant information about the type arguments that allows a single function instantiation to run correctly for many distinct type arguments.
However, for simplicity (and performance) of implementation, we do not have a single compilation of a generic function/method for all possible type arguments. Instead, we share an instantiation of a generic function/method among sets of type arguments that have the same gcshape.
## Gcshapes
A **gcshape** (or **gcshape grouping**) is a collection of types that can all share the same instantiation of a generic function/method in our implementation when specified as one of the type arguments. So, for example, in the case of a generic function with a single type parameter, we only need one function instantiation for all type arguments in the same [gcshape](https://github.com/golang/proposal/blob/master/design/generics-implementation-gcshape.md) grouping. Similarly, for a method of a generic type with a single type parameter, we only need one instantiation for all type arguments (of the generic type) in the same gcshape grouping. A **gcshape type** is a specific type that we use in the implementation in such an instantiation to fill in for all types of the gcshape grouping.
We are currently implementing gcshapes in a fairly fine-grained manner. Two concrete types are in the same gcshape grouping if and only if they have the same underlying type or they are both pointer types. We are intentionally defining gcshapes such that we don’t ever need to include any operator methods (e.g. the implementation of the “+” operator for a specified type arg) in a dictionary. In particular, fundamentally different built-in types such as `int` and `float64` are never in the same gcshape. Even `int16` and `int32` have distinct operations (notably left and right shift), so we don’t put them in the same gcshape. Similarly, we intend that all types in a gcshape will always implement builtin methods (such as `make` / `new` / `len` ) in the same way. We could include some very closely related built-in types (such as `uint` and `uintptr`) in the same gcshape, but are not currently doing that. This is already implied by our current fine-grain gcshapes, but we also always want an interface type to be in a different gcshape from a non-interface type (even if the non-interface type has the same two-field structure as an interface type). Interface types behave very differently from non-interface types in terms of calling methods, etc.
We currently name each gcshape type based on the unique string representation (as implemented in `types.LinkString`) of its underlying type. We put all shape types in a unique builtin-package “`go.shape`”. For implementation reasons (see next section), we happen to include in the name of a gcshape type the index of the gcshape argument in the type parameter list. So, a type with underlying type “string” would correspond to a gcshape type named “`go.shape.string_0`” or “`go.shape.string_1`”, depending on whether the type is used as the first or second type argument of a generic function or type. All pointer types are named after a single example type `*uint8`, so the names of gcshapes for pointer shapes are `go.shape.*uint8_0`, `go.shape.*uint8_1`, etc.
We refer to an instantiation of a generic function or method for a specific set of shape type arguments as a **shape instantiation**.
## Dictionary Format
Each dictionary is statically defined at compile-time. A dictionary corresponds to a call site in a program where a specific generic function/method is called with a specific set of concrete type arguments. A dictionary is needed whenever a generic function/method is called, regardless of whether it is called from a non-generic or generic function/method. A dictionary is currently named after the fully-qualified generic function/method name being called and the names of the concrete type arguments. Two example dictionary names are `main..dict.Map[int,bool]` and `main..dict.mapCons[int,bool].Apply`. These are the dictionaries needed for a call or reference to `main.Map[int, bool]()` and `rcvr.Apply()`, where `rcvr` has type `main.mapCons[int, bool]`. The dictionary contains the information needed to execute a gcshape-based instantiation of that generic function/method with those concrete type arguments. Dictionaries with the same name are fully de-duped (by some combination of the compiler and the linker).
We can gather information on the expected format of a dictionary by analyzing the shape instantiation of a generic function/method. We analyze an instantiation, instead of the generic function/method itself, because the required dictionary entries can depend on the shape arguments - notably whether a shape argument is an interface type or not. It is important that the instantiation has been “transformed” enough that all implicit interface conversions (`OCONVIFACE`) have been made explicit. Explicit or implicit interface conversions (in particular, conversions to non-empty interfaces) may require an extra entry in the dictionary.
In order to create the dictionary entries, we often need to substitute the shape type arguments with the real type arguments associated with the dictionary. The shape type arguments must therefore be fully distinguishable, even if several of the type arguments happen to have the same shape (e.g. they are both pointer types). Therefore, as mentioned above, we actually add the index of the type parameter to the shape type, so that different type arguments can be fully distinguished correctly.
The types of entries in a dictionary are as follows:
* **The list of the concrete type arguments of the generic function/method**
* Types in the dictionary are always the run-time type descriptor (a pointer to `runtime._type`)
* **The list of all (or needed) derived types**, which appear in or are implicit in some way in the generic function/method, substituted with the concrete type arguments.
* That is, the list of concrete types that are specifically derived from the type parameters of the function/method (e.g. `*T`, `[]T`, `map[K]V`, etc.) and used in some way in the generic function/method.
* We currently use the derived types for several cases where we need the runtime type of an expression. These cases include explicit or implicit conversions to an empty interface, and type assertions and type switches, where the type of the source value is an empty interface.
* The derived type and type argument entries are also used at run time by the debugger to determine the concrete type of arguments and local variables. At compile time, information about the type argument and derived type dictionary entries is emitted with the DWARF info. For each argument or local variable that has a parameterized type, the DWARF info also indicates the dictionary entry that will contain the concrete type of the argument or variable.
* **The list of all sub-dictionaries**:
* A sub-dictionary is needed for a generic function/method call inside a generic function/method, where the type arguments of the inner call depend on the type parameters of the outer function. Sub-dictionaries are similarly needed for function/method values and method expressions that refer to generic functions/methods.
* A sub-dictionary entry points to the normal top-level dictionary that is needed to execute the called function/method with the required type arguments, as substituted using the type arguments of the dictionary of the outer function.
* **Any specific itabs needed for conversion to a specific non-empty interface** from a type param or derived type. There are currently four main cases where we use dictionary-derived itabs. In all cases, the itab must come from the dictionary, since it depends on the type arguments of the current function.
* For all explicit or implicit `OCONVIFACE` calls from a non-interface type to a non-empty interface. The itab is used to create the destination interface.
* For all method calls on a type parameter (which must be to a method in the type parameter’s bound). This method call is implemented as a conversion of the receiver to the type bound interface, and hence is handled similarly to an implicit `OCONVIFACE` call.
* For all type assertions from a non-empty interface to a non-interface type. The itab is needed to implement the type assertion.
* For type switch cases that involve a non-interface type derived from the type params, where the value being switched on has a non-empty interface type. As with type assertions, the itab is needed to implement the type switch.
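The itab cases above can be illustrated with ordinary generic Go code. In each commented operation below, the shared shape instantiation cannot know the concrete type at compile time, so the itab it needs comes from the dictionary (the function and type names here are illustrative):

```go
package main

import "fmt"

type Stringer interface{ String() string }

type point struct{ x, y int }

func (p point) String() string { return fmt.Sprintf("(%d,%d)", p.x, p.y) }

// toIface converts the type parameter to a non-empty interface: an
// implicit OCONVIFACE, whose itab is loaded from the dictionary.
func toIface[T Stringer](v T) Stringer {
	return v
}

// callBound calls a method from the type parameter's bound. This is
// implemented as a conversion of the receiver to the bound interface,
// again using a dictionary itab.
func callBound[T Stringer](v T) string {
	return v.String()
}

// fromIface type-asserts a non-empty interface down to the
// (non-interface) type parameter; the itab to compare against comes
// from the dictionary. A type switch with a case of type T is handled
// similarly.
func fromIface[T Stringer](i Stringer) (T, bool) {
	t, ok := i.(T)
	return t, ok
}

func main() {
	p := point{1, 2}
	s := toIface(p)
	fmt.Println(callBound(p)) // (1,2)
	q, ok := fromIface[point](s)
	fmt.Println(q, ok) // (1,2) true
}
```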
We have decided that closures in generic functions/methods that reference generic values/types should use the same dictionary as their containing function/method. Therefore, a dictionary for an instantiated function/method should include all the entries needed for all bodies of the closures it contains as well.
The current implementation may have duplicate sub-dictionary entries and/or duplicate itab entries. The entries can clearly be de-duplicated and shared with a bit more work in the implementation. For some unusual cases, there may also be some unused dictionary entries that could be optimized away.
### Non-monomorphisable Functions
Our choice to compute all dictionaries and sub-dictionaries at compile time does mean that there are some programs that we cannot run. We must have a dictionary for each possible instantiation of a generic function/method with specific concrete types. Because we require all dictionaries to be created statically at compile-time, there must be a finite, known set of types that are used for creating function/method instantiations. Therefore, we cannot handle programs that, via recursion of generic functions/methods, can create an unbounded number of distinct types (typically by repeated nesting of a generic type). A typical example is shown in [issue #48018](https://github.com/golang/go/issues/48018). These types of programs are often called **non-monomorphisable**. If we could create dictionaries (and instantiations of generic types) dynamically at run-time, then we might be able to handle some of these cases of non-monomorphisable code.
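A minimal sketch of such a program, in the spirit of issue #48018 (this code is deliberately not compilable: each recursive call would require a fresh instantiation `f[[]T]`, `f[[][]T]`, and so on, so no finite set of dictionaries suffices, and the compiler rejects it with an instantiation-cycle error):

```go
package main

func f[T any](n int, x T) {
	if n == 0 {
		return
	}
	f(n-1, []T{x}) // requires f[[]T], then f[[][]T], ... without bound
}

func main() {
	f(3, 0)
}
```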
## Function and method instantiations
A compile-time instantiation of a generic function or method of a generic type is created for a specific set of gcshape type arguments. As mentioned above, we sometimes call such an instantiation a **shape instantiation**. We determine on-the-fly during compilation which shape instantiations need to be created, as described below in “Compiler processing for calls to generic functions and methods”. Given a set of gcshape type arguments, we create an instantiated function or method by substituting the shape type arguments for the corresponding type parameters throughout the function/method body and header. The function body includes any closures contained in the function.
During the substitution, we also “transform” any relevant nodes. The old typechecker (the `typecheck` package) not only determined the type of every node in a function or declaration, but also did a variety of transformations of the code, usually to a more specific node operation, but also to make explicit nodes for any implicit operations (such as conversions). These transformations often cannot be done until the exact type of the operands are known. So, we delay applying these transformations to generic functions during the noding process. Instead, we apply the transforms while doing the type substitution to create an instantiation. A number of these transformations include adding implicit `OCONVIFACE` nodes. It is important that all `OCONVIFACE` nodes are represented explicitly before determining the dictionary format of the instantiation.
When creating an instantiated function/method, we also automatically add a dictionary parameter “.dict” as the first parameter, preceding even the method receiver.
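A hand-desugared sketch of this calling convention (all names here are hypothetical; a real dictionary holds runtime type descriptors, derived types, sub-dictionaries, and itabs rather than a string):

```go
package main

import "fmt"

// dict stands in for a compiler-generated static dictionary.
type dict struct {
	typeArg string // stands in for a *runtime._type entry
}

// maxShape plays the role of the shape instantiation Max[go.shape.int_0]:
// one compiled body shared by every type argument whose gcshape is int.
// The dictionary is the implicit first parameter, ".dict".
func maxShape(d *dict, a, b int) int {
	if a > b {
		return a
	}
	return b
}

// One static dictionary per concrete instantiation, analogous to
// main..dict.Max[int] in the text.
var dictMaxInt = &dict{typeArg: "int"}

func main() {
	// A source-level call Max[int](1, 2) is compiled roughly as:
	fmt.Println(maxShape(dictMaxInt, 1, 2)) // 2
}
```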
We have a hash table of shape instantiations that have already been created during this package compilation, so we do not need to create the same instantiation repeatedly. Along with the instantiated function itself, we also save some extra information that is needed for the dictionary pass described below. This includes the format of the dictionary associated with the instantiation and other information that is only accessible from the generic function (such as the bounds of the type params) or is hard to access directly from the instantiation body. We compute this extra information (dictionary format, etc.) as the final step of creating an instantiation.
### Naming of functions, methods, and dictionaries
In the compiler, the naming of generic and instantiated functions and methods is as follows:
* generic function - just the name (with no type parameters), such as Max
* instantiated function - the name with the type argument, such as `Max[int]` or `Max[go.shape.int_0]`.
* generic method - the receiver type with the type parameter that is used in the method definition, and the method name, such as `(*value[T]).Set`. (As a reminder, a method cannot have any extra type parameters besides the type parameters of its receiver type.)
* instantiated method - the receiver type with the type argument, and the method name, such as `(*value[int]).Set` or `(*value[go.shape.string_0]).Set`.
Currently, because the compiler is using only dictionaries (never pure stenciling), the only function names that typically appear in the executable are the functions and methods instantiated by shape types. Some methods instantiated by concrete types can appear if there are required itabs that must include references to these fully-instantiated methods (see the "Itab dictionary wrappers" section just below).
Dictionaries are named similarly to the associated instantiated function or method, but with “.dict” prepended. So, examples include `.dict.Max[float64]` and `.dict.(*value[int]).get`. A dictionary is always defined for a concrete set of types, so there are never any type params or shape types in a dictionary name.
The concrete type names that are included in instantiated function and method names, as well as dictionary names, are fully-specified (including the package name, if not the builtin package). Therefore, the instantiated function, instantiated method, and dictionary names are uniquely specified. As a result, they can be generated on demand in any package, as needed, and multiple instances of the same function, method, or dictionary will automatically be de-duplicated by the linker.
### Itab dictionary wrappers
For direct calls of generic functions or methods of generic types, the compiler automatically adds an extra initial argument, which is the required dictionary, when calling the appropriate shape instantiation. That dictionary may be either a reference to a static dictionary (if the concrete types are statically known) or to a sub-dictionary of the containing function’s dictionary. If a function value, method value, or method expression is created, then the compiler will automatically create a closure that calls the appropriate shape instantiation with the correct dictionary when the function or method value or method expression is called. A similar closure wrapper is needed when generating each entry of the itab of a fully-instantiated generic type, since an itab entry must be a function that takes the appropriate receiver and other arguments, but no dictionary.
## Compiler processing for calls to generic functions and methods
Most of the generics-specific processing happens in the front-end of the compiler.
* Types2 typechecker (new) - the types2-typechecker is a new typechecker which can do complete validation and typechecking of generic programs. It is written to be independent of the rest of the compiler, and passes the typechecking information that it computes to the rest of the compiler in a set of maps.
* Noder pass (pre-existing, but completely rewritten to use the types2 typechecker information) - the noder pass creates the ir.Node representation of all functions/methods in the current package. We create node representations for both generic and non-generic functions. We use information from the types2-typechecker to set the type of each Node. Various nodes in generic functions may have types that depend on the type parameters. For non-generic functions, we do the normal transformations associated with the old typechecker, as mentioned above. We do not do the transformations for generic functions, since many of the transformations are dependent on concrete type information.
During noding, we record each fully-instantiated non-interface type that already exists in the source code. For example, any function (generic or non-generic) might happen to specify a variable of type ‘`List[int]`’. We do the same thing when importing a needed function body (either because it is a generic function that will be instantiated or because it is needed for inlining).
The body of an exportable generic function is always exported, since an exported generic function may be called and hence needs to be instantiated in any other package in which it is referenced. Similarly, the bodies of the methods of an exportable generic type are also always exported, since we need to instantiate these methods whenever the generic type is instantiated. Unexported generic functions and types may need to be exported if they are referenced by an inlinable function (see `crawler.go`).
* Scan pass (new) - a pass over all non-generic functions and instantiated functions that looks for references to generic functions/methods. At any such reference, it creates the required shape instantiation (if not yet created during the current compilation) and transforms the reference to use the shape instantiation and pass in the appropriate dictionary. The scan pass is executed repeatedly over all newly created instantiated functions/methods, until there are no more instantiations that have been created.
* At the beginning of each iteration of the scan pass, we create all the instantiated methods and dictionaries needed for each fully-instantiated type that has been seen since the last iteration of the scan pass (or from the noder pass, in the case of the first iteration of the scan pass). This ensures that the required method instantiations will be available when creating runtime type descriptors and itabs, including the itabs needed in dictionaries.
* For each reference to a generic function/method in a function being scanned, we determine the GC shapes of the type arguments. If we haven’t already created the needed instantiation with those shape arguments, we create the instantiation by doing a substitution of types on the generic function header and body. The generic function may be from another package, in which case we need to import its function body. Once we have created the instantiation, we can then determine the format of the associated dictionary. We replace the reference to the generic function/method with a call (possibly in a closure) to the required instantiation with the required dictionary argument. If the reference is in a non-generic function, then the required dictionary argument will be a top-level static dictionary. If the reference is in a shape instantiation, then the dictionary argument will be a sub-dictionary entry from the dictionary of the containing function. We compute top-level dictionaries (and all their required sub-dictionaries, recursively) on demand as needed using the dictionary format information.
* As with the noder pass, we record any new fully-instantiated non-interface type that is created. In the case of the scan pass, this type will be created because of type substitution. Typically, it will be for dictionary entries for derived types. If we were doing pure stenciling in some cases, then it would happen analogously when creating the concrete types in a purely stenciled function (no dictionaries).
* Dictionary pass (new) - a pass over all instantiated functions/methods that transforms operations that require a dictionary entry. These operations include calls to a method of a type parameter’s bound, conversion of a parameterized type to an interface, and type assertions and type switches on a parameterized type. This pass must be separate (after the scan pass), since we must determine the dictionary format for the instantiation before doing any of these transformations. The dictionary pass typically transforms these operations to access a specific entry in the dictionary (which is either a runtime type or an itab) and then use that entry in a specific way.
There is an interesting phase ordering problem with respect to inlining. Currently, we try to do all of the processing for generics right after noding, so there is minimal effect on the rest of the compiler. We have mostly succeeded - after the dictionary pass, instantiated functions are treated as normal type-checked code and can be further processed and optimized normally. However, the inlining pass can introduce new code via a newly inlined function, and that new code may reference a variable with a new instantiated type and call methods on that variable or store the variable in an interface. So, we may potentially need to create new instantiations during the inlining pass.
However, we can avoid the phase ordering problem if, when we export the body of an inlineable function that references an instantiated type I, we also export any needed information related to type I. That way, we will have the necessary information during inlining in a new package without fully re-creating the instantiated type I. One approach would be to fully export such a fully-instantiated type I. But that approach is overly complicated and changes the export format in an ugly way. The approach that works out most cleanly (and that we used) is to just export the shape instantiations and dictionaries needed for the methods of I. The type I and the wrappers for the methods of I will be re-created (and de-duped) on the importing side, but there will be no need for any extra instantiation pass (to create shape instantiations or dictionaries), since the needed instantiations and dictionaries will already be available for import.
# Proposal: Simplify mark termination and eliminate mark 2
Author(s): Austin Clements
Last updated: 2018-08-09
Discussion at https://golang.org/issue/26903.
## Abstract
Go's garbage collector has evolved substantially over time, and as
with any software with a history, there are places where the vestigial
remnants of this evolution show.
This document proposes several related simplifications to the design
of Go's mark termination and related parts of concurrent marking that
were made possible by shifts in other parts of the garbage collector.
The keystone of these simplifications is a new mark completion
algorithm.
The current algorithm is racy and, as a result, mark termination must
cope with the possibility that there may still be marking work to do.
We propose a new algorithm based on distributed termination detection
that both eliminates this race and replaces the existing "mark 2"
sub-phase, yielding simplifications throughout concurrent mark and
mark termination.
This new mark completion algorithm combined with a few smaller changes
can simplify or completely eliminate several other parts of the
garbage collector. Hence, we propose to also:
1. Unify stop-the-world GC and checkmark mode with concurrent marking;
2. Flush mcaches after mark termination;
3. And allow safe-points without preemption in dedicated workers.
Taken together, these fairly small changes will allow us to eliminate
mark 2, "blacken promptly" mode, the second root marking pass,
blocking drain mode, `getfull` and its troublesome spin loop, work
queue draining during mark termination, `gchelper`, and idle worker
tracking.
This will eliminate a good deal of subtle code from the garbage
collector, making it simpler and more maintainable.
As an added bonus, it's likely to perform a little better, too.
## Background
Prior to Go 1.5, Go's garbage collector was a stop-the-world garbage
collector, and Go continues to support STW garbage collection as a
debugging mode.
Go 1.5 introduced a concurrent collector, but, in order to minimize a
massively invasive change, it kept much of the existing GC mechanism
as a STW "mark termination" phase, while adding a concurrent mark
phase before the STW phase.
This concurrent mark phase did as much as it could, but ultimately
fell back to the STW algorithm to clean up any work it left behind.
Until Go 1.8, concurrent marking always left behind at least some
work, since any stacks that had been modified during the concurrent
mark phase had to be re-scanned during mark termination.
Go 1.8 introduced [a new write barrier](17503-eliminate-rescan.md)
that eliminated the need to re-scan stacks.
This significantly reduced the amount of work that had to be done in
mark termination.
However, since it had never really mattered before, we were sloppy
about entering mark termination: the algorithm that decided when it
was time to enter mark termination *usually* waited until all work was
done by concurrent mark, but sometimes [work would slip
through](17503-eliminate-rescan.md#appendix-mark-completion-race),
leaving mark termination to clean up the mess.
Furthermore, in order to minimize (though not eliminate) the chance of
entering mark termination prematurely, Go 1.5 divided concurrent
marking into two phases creatively named "mark 1" and "mark 2".
During mark 1, when it ran out of global marking work, it would flush
and disable all local work caches (enabling "blacken promptly" mode)
and enter mark 2.
During mark 2, when it ran out of global marking work again, it would
enter mark termination.
Unfortunately, blacken promptly mode has performance implications
(there was a reason for those local caches), and this algorithm can
enter mark 2 very early in the GC cycle since it merely detects a work
bottleneck.
And while disabling all local caches was intended to prevent premature
mark termination, this doesn't always work.
## Proposal
There are several steps to this proposal, and it's not necessary to
implement all of them.
However, the crux of the proposal is a new termination detection
algorithm.
### Replace mark 2 with a race-free algorithm
We propose replacing mark 2 with a race-free algorithm based on ideas
from distributed termination detection [Matocha '97].
The GC maintains several work queues of grey objects to be blackened.
It maintains two global queues, one for root marking work and one for
heap objects, but we can think of these as a single logical queue.
It also maintains a queue of locally cached work on each *P* (that is,
each GC worker).
A P can move work from the global queue to its local queue or
vice-versa.
Scanning removes work from the local queue and may add work back to
the local queue.
This algorithm does not change the structure of the GC's work queues
from the current implementation.
A P *cannot* observe or remove work from another P's local queue.
A P also *cannot* create work from nothing: it must consume a marking
job in order to create more marking jobs.
This is critical to termination detection because it means termination
is a stable condition.
Furthermore, all of these actions must be *GC-atomic*; that is, there
are no safe-points within each of these actions.
Again, all of this is true of the current implementation.
The proposed algorithm is as follows:
First, each P maintains a local *flushed* flag that it sets whenever
the P flushes any local GC work to the global queue.
The P may cache an arbitrary amount of GC work locally without setting
this flag; the flag indicates that it may have shared work with
another P.
This flag is only accessed synchronously, so it need not be atomic.
When a P's local queue is empty and the global queue is empty, it runs
the termination detection algorithm:
1. Acquire a global termination detection lock (only one P may run
this algorithm at a time).
2. Check the global queue. If it is non-empty, we have not reached
termination, so abort the algorithm.
3. Execute a ragged barrier. On each P, when it reaches a safe-point,
1. Flush the local write barrier buffer.
This may mark objects and add pointers to the local work queue.
2. Flush the local work queue.
This may set the P's flushed flag.
3. Check and clear the P's flushed flag.
4. If any P's flushed flag was set, we have not reached termination,
so abort the algorithm.
If no P's flushed flag was set, enter mark termination.
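The algorithm above can be sketched as a small single-threaded simulation (all types and names here are hypothetical; in the real runtime the barrier runs raggedly across Ps as each reaches a safe-point, and the queues hold pointers to grey objects rather than ints):

```go
package main

import "fmt"

// P models a GC worker with a locally cached work queue and the
// per-P flushed flag described above.
type P struct {
	local   []int // locally cached mark jobs
	flushed bool  // set whenever this P shares work via the global queue
}

// GC models the global work queue and the set of Ps.
type GC struct {
	global []int
	ps     []*P
}

// flush moves a P's local work to the global queue, setting its
// flushed flag if it actually shared any work.
func (gc *GC) flush(p *P) {
	if len(p.local) > 0 {
		gc.global = append(gc.global, p.local...)
		p.local = nil
		p.flushed = true
	}
}

// tryTerminate runs the detection algorithm: abort if the global queue
// is non-empty, then execute the (here, sequential) ragged barrier,
// flushing each P and checking-and-clearing its flushed flag.
// Termination is detected only if no P had its flag set.
func (gc *GC) tryTerminate() bool {
	if len(gc.global) > 0 {
		return false
	}
	anyFlushed := false
	for _, p := range gc.ps { // ragged barrier; order is irrelevant
		gc.flush(p)
		if p.flushed {
			anyFlushed = true
			p.flushed = false
		}
	}
	return !anyFlushed
}

func main() {
	gc := &GC{ps: []*P{{local: []int{1}}, {}}}
	fmt.Println(gc.tryTerminate()) // false: P0 still had cached work to share
	gc.global = nil                // pretend the flushed job was scanned to completion
	fmt.Println(gc.tryTerminate()) // true: all queues empty, no flags set
}
```

Note how a P that cached work locally forces one failed wave: flushing during the barrier both publishes the work for balancing and sets the flag that vetoes termination.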
Like most wave-based distributed termination algorithms, it may be
necessary to run this algorithm multiple times during a cycle.
However, this isn't necessarily a disadvantage: flushing the local
work queues also serves to balance work between Ps, and makes it okay
to keep work cached on a P that isn't actively doing GC work.
There are a few subtleties to this algorithm that are worth noting.
First, unlike many distributed termination algorithms, it *does not*
detect that no work was done since the previous barrier.
It detects that no work was *communicated*, and that all queues were
empty at some point.
As a result, while many similar algorithms require at least two waves
to detect termination [Hudson '97], this algorithm can detect
termination in a single wave.
For example, on small heaps it's possible for all Ps to work entirely
out of their local queues, in which case mark can complete after just
a single wave.
Second, while the only way to add work to the local work queue is by
consuming work, this is not true of the local write barrier buffer.
Since this buffer simply records pointer writes, and the recorded
objects may already be black, it can continue to grow after
termination has been detected.
However, once termination is detected, we know that all pointers in
the write barrier buffer must be to black objects, so this buffer can
simply be discarded.
#### Variations on the basic algorithm
There are several small variations on the basic algorithm that may be
desirable for implementation and efficiency reasons.
When flushing the local work queues during the ragged barrier, it may
be valuable to break up the work buffers that are put on the global
queue.
For efficiency, work is tracked in batches and the queues track these
batches, rather than individual marking jobs.
The ragged barrier is an excellent opportunity to break up these
batches to better balance work.
The ragged barrier places no constraints on the order in which Ps
flush, nor does it need to run on all Ps if some P has its local
flushed flag set.
One obvious optimization this allows is for the P that triggers
termination detection to flush its own queues and check its own
flushed flag before trying to interrupt other Ps.
If its own flushed flag is set, it can simply clear it and abort (or
retry) termination detection.
#### Consequences
This new termination detection algorithm replaces mark 2, which means
we no longer need blacken-promptly mode.
Hence, we can delete all code related to blacken-promptly mode.
It also eliminates the mark termination race, so, in concurrent mode,
mark termination no longer needs to detect this race and behave
differently.
However, we should probably continue to detect the race and panic, as
detecting the race is cheap and this is an excellent self-check.
### Unify STW GC and checkmark mode with concurrent marking
The next step in this proposal is to unify stop-the-world GC and
checkmark mode with concurrent marking.
Because of the GC's heritage from a STW collector, there are several
code paths that are specific to STW collection, even though STW is
only a debugging option at this point.
In fact, as we've made the collector more concurrent, more code paths
have become vestigial, existing only to support STW mode.
This adds complexity to the garbage collector and makes this debugging
mode less reliable (and less useful) as these code paths are poorly
tested.
We propose instead implementing STW collection by reusing the existing
concurrent collector, but simply telling the scheduler that all Ps
must run "dedicated GC workers".
Hence, while the world won't technically be stopped during marking, it
will effectively be stopped.
#### Consequences
Unifying STW into concurrent marking directly eliminates several code
paths specific to STW mode.
Most notably, concurrent marking currently has two root marking phases
and STW mode has a single root marking pass.
All three of these passes must behave differently.
Unifying STW and concurrent marking collapses all three passes into
one.
In conjunction with the new termination detection algorithm, this
eliminates the need for mark work draining during mark termination.
As a result, the write barrier does not need to be on during mark
termination, and we can eliminate blocking drain mode entirely.
Currently, blocking drain mode is only used if the mark termination
race happens or if we're in STW mode.
This in turn eliminates the troublesome spin loop in `getfull` that
implements blocking drain mode.
Specifically, this eliminates `work.helperDrainBlock`, `gcDrainBlock`
mode, `gcWork.get`, and `getfull`.
At this point, the `gcMark` function should be renamed, since it will
no longer have anything to do with marking.
Unfortunately, this isn't enough to eliminate work draining entirely
from mark termination, since the draining mechanism is also used to
flush mcaches during mark termination.
### Flush mcaches after mark termination
The third step in this proposal is to delay the flushing of mcaches
until after mark termination.
Each P has an mcache that tracks spans being actively allocated from
by that P.
Sweeping happens when a P brings a span into its mcache, or can happen
asynchronously as part of background sweeping.
Hence, the spans in mcaches must be flushed out in order to trigger
sweeping of those spans and to prevent a race between allocating from
an unswept span and the background sweeper sweeping that span.
While it's important for the mcaches to be flushed between enabling
the sweeper and allocating, it does not have to happen during mark
termination.
Hence, we propose to flush each P's mcache when that P returns from
the mark termination STW.
This is early enough to ensure no allocation can happen on that P,
parallelizes this flushing, and doesn't block other Ps during
flushing.
Combined with the first two steps, this eliminates the only remaining
use of work draining during mark termination, so we can eliminate mark
termination draining entirely, including `gchelper` and related
mechanisms (`mhelpgc`, `m.helpgc`, `helpgc`, `gcprocs`,
`needaddgcproc`, etc).
### Allow safe-points without preemption in dedicated workers
The final step of this proposal is to allow safe-points in dedicated
GC workers.
Currently, dedicated GC workers only reach a safe-point when there is
no more local or global work.
However, this interferes with the ragged barrier in the termination
detection algorithm (which can only run at a safe-point on each P).
As a result, it's only fruitful to run the termination detection
algorithm if there are no dedicated workers running, which in turn
requires tracking the number of running and idle workers, and may
delay work balancing.
By allowing more frequent safe-points in dedicated GC workers,
termination detection can run more eagerly.
Furthermore, worker tracking was based on the mechanism used by STW GC
to implement the `getfull` barrier.
Once that has also been eliminated, we no longer need any worker
tracking.
## Proof of termination detection algorithm
The proposed termination detection algorithm is remarkably simple to
implement, but subtle in its reasoning.
Here we prove it correct and endeavor to provide some insight into why
it works.
**Theorem.** The termination detection algorithm succeeds only if all
mark work queues are empty when the algorithm terminates.
**Proof.** Assume the termination detection algorithm succeeds.
In order to show that all mark work queues must be empty once the
algorithm succeeds, we use induction to show that all possible actions
must maintain three conditions: 1) the global queue is empty, 2) all
flushed flags are clear, and 3) after a P has been visited by the
ragged barrier, its local queue is empty.
First, we show that these conditions were true at the instant the
algorithm observed that the global queue was empty.
This point in time trivially satisfies condition 1.
Since the algorithm succeeded, each P's flushed flag must have been
clear when the ragged barrier observed that P.
Because termination detection is the only operation that clears the
flushed flags, each flag must have been clear for all time between the
start of termination detection and when the ragged barrier observed
the flag.
In particular, all flags must have been clear at the instant it
observed that the global queue was empty, so condition 2 is satisfied.
Condition 3 is trivially satisfied at this point because no Ps have
been visited by the ragged barrier.
This establishes the base case for induction.
Next, we consider all possible actions that could affect the state of
the queue or the flags after this initial state.
There are four such actions:
1. The ragged barrier can visit a P.
This may modify the global queue, but if it does so it will set the
flushed flag and the algorithm will not succeed, contradicting the
assumption.
Thus it could not have modified the global queue, maintaining
condition 1.
For the same reason, we know it did not set the flushed flag,
maintaining condition 2.
Finally, the ragged barrier adds the P to the set of visited Ps, but
it also flushes the P's local queue, thus maintaining condition 3.
2. If the global queue is non-empty, a P can move work from the global
queue to its local queue.
By assumption, the global queue is empty, so this action can't
happen.
3. If its local queue is non-empty, a P can consume local work and
potentially produce local work.
This action does not modify the global queue or flushed flag, so it
maintains conditions 1 and 2.
If the P has not been visited by the ragged barrier, then condition
3 is trivially maintained.
If it has been visited, then by assumption the P's local queue is
empty, so this action can't happen.
4. If the local queue is non-empty, the P can move work from the local
queue to the global queue.
There are two sub-cases.
If the P has not been visited by the ragged barrier, then this
action would set the P's flushed flag, causing termination
detection to fail, which contradicts the assumption.
If the P has been visited by the ragged barrier, then its local
queue is empty, so this action can't happen.
Therefore, by induction, all three conditions must be true when
termination detection succeeds.
Notably, we've shown that once the ragged barrier is complete, none of
the per-P actions (2, 3, and 4) can happen.
Thus, if termination detection succeeds, then by conditions 1 and 3,
all mark work queues must be empty.
**Corollary.** Once the termination detection algorithm succeeds,
there will be no work to do in mark termination.
Go's GC never turns a black object grey because it uses black mutator
techniques (once a stack is black it remains black) and a
forward-progress barrier.
Since the mark work queues contain pointers to grey objects, it
follows that once the mark work queues are empty, they will remain
empty, including when the garbage collector transitions into mark
termination.
## Compatibility
This proposal does not affect any user-visible APIs, so it is Go 1
compatible.
## Implementation
This proposal can be implemented incrementally, and each step opens up
new simplifications.
The first step will be to implement the new termination detection
algorithm, since all other simplifications build on that, but the
other steps can be implemented as convenient.
Austin Clements plans to implement all or most of this proposal for Go
1.12.
The actual implementation effort for each step is likely to be fairly
small (the mark termination algorithm was implemented and debugged in
under an hour).
## References
[Hudson '97] R. L. Hudson, R. Morrison, J. E. B. Moss, and D. S.
Munro. "Garbage collecting the world: One car at a time." In *ACM
SIGPLAN Notices* 32(10):162–175, October 1997.
[Matocha '98] Jeff Matocha and Tracy Camp. "A taxonomy of distributed
termination detection algorithms." In *Journal of Systems and
Software* 43(3):207–221, November 1998.
---

<!-- design/TEMPLATE.md -->

[This is a template for Go's change proposal process, documented [here](../README.md).]
# Proposal: [Title]
Author(s): [Author Name, Co-Author Name]
Last updated: [Date]
Discussion at https://go.dev/issue/NNNNN.
## Abstract
[A short summary of the proposal.]
## Background
[An introduction of the necessary background and the problem being solved by the proposed change.]
## Proposal
[A precise statement of the proposed change.]
## Rationale
[A discussion of alternate approaches and the trade-offs, advantages, and disadvantages of the specified approach.]
## Compatibility
[A discussion of the change with regard to the
[compatibility guidelines](https://go.dev/doc/go1compat).]
## Implementation
[A description of the steps in the implementation, who will do them, and when.
This should include a discussion of how the work fits into [Go's release cycle](https://go.dev/wiki/Go-Release-Cycle).]
## Open issues (if applicable)
[A discussion of issues relating to this proposal for which the author does not
know the solution. This section may be omitted if there are none.]
---

<!-- design/27539-internal-abi.md -->

# Proposal: Create an undefined internal calling convention
Author(s): Austin Clements
Last updated: 2019-01-14
Discussion at https://golang.org/issue/27539.
## Abstract
Go's current calling convention interferes with several significant
optimizations, such as [register
passing](https://golang.org/issue/18597) (a potential 5% win).
Despite the obvious appeal of these optimizations, we've encountered
significant roadblocks to their implementation.
While Go's calling convention isn't covered by the [Go 1 compatibility
promise](https://golang.org/doc/go1compat), it's impossible to write
Go assembly code without depending on it, and there are many important
packages that use Go assembly.
As a result, much of Go's calling convention is effectively public and
must be maintained in a backwards-compatible way.
We propose a way forward based on having multiple calling conventions.
We propose maintaining the existing calling convention and introducing
a new, private calling convention that is explicitly not
backwards-compatible and not accessible to assembly code, with a
mechanism to keep different calling convention transparently
inter-operable.
This same mechanism can be used to introduce other public, stable
calling conventions in the future, but the details of that are outside
the scope of this proposal.
This proposal is *not* about any specific new calling convention.
It's about *enabling* new calling conventions to work in the existing
Go ecosystem.
This is one step in a longer-term plan.
## Background
Language environments depend on *application binary interfaces* (ABIs)
to define the machine-level conventions for operating within that
environment.
One key aspect of an ABI is the *calling convention*, which defines
how function calls in the language operate at a machine-code level.
Go's calling convention specifies how functions pass argument values
and results (on the stack), which registers have fixed functions
(e.g., R10 on ARM is the "g" register) or may be clobbered by a call
(all non-fixed function registers), and how to interact with stack
growth, the scheduler, and the garbage collector.
Go's calling convention as of Go 1.11 is simple and nearly universal
across platforms, but also inefficient and inflexible.
It is rife with opportunities for improving performance.
For example, experiments with [passing arguments and results in
registers](https://golang.org/issue/18597) suggest a 5% performance
win.
Propagating register clobbers up the call graph could avoid
unnecessary stack spills.
Keeping the stack bound in a fixed register could eliminate two
dependent memory loads on every function entry on x86.
Passing dynamic allocation scopes could reduce heap allocations.
And yet, even though the calling convention is invisible to Go
programs, almost every substantive change we've attempted has been
stymied because changes break existing Go *assembly* code.
While there's relatively little Go assembly (roughly 170 kLOC in
public GitHub repositories<sup>*</sup>), it tends to lie at the heart
of important packages like crypto and numerical libraries.
This proposal operates within two key constraints:
1. We can't break existing assembly code, even though it isn't
technically covered by Go 1 compatibility.
There's too much of it and it's too important.
Hence, we can't change the calling convention used by existing
assembly code.
2. We can't depend on a transition period after which existing
   assembly would break.
Too much code simply doesn't get updated, or if it does, it doesn't
get re-vendored.
Hence, it's not enough to give people a transition path to a new
calling convention and some time.
Existing code must continue to work.
This proposal resolves this tension by introducing multiple calling
conventions.
Initially, we propose two: one is stable, documented, and codifies the
rules of the current calling convention; the other is unstable,
internal, and may change from release to release.
<sup>*</sup> This counts non-comment, non-whitespace lines of code in
unique files. It excludes vendored source and source with a "Go
Authors" copyright notice.
## Proposal
We propose introducing a second calling convention.
* `ABI0` is the current calling convention, which passes arguments and
results on the stack, clobbers all registers on calls, and has a few
platform-dependent fixed registers.
* `ABIInternal` is unstable and may change from release to release.
Initially, it will be identical to `ABI0`, but `ABIInternal` opens
the door for changes.
Once we're happy with `ABIInternal`, we may "snapshot" it as a new
stable `ABI1`, allowing assembly code to be written against the
presumably faster, new calling convention.
This would not eliminate `ABIInternal`, as `ABIInternal` could later
diverge from `ABI1`, though `ABI1` and `ABIInternal` may be identical
for some time.
A text symbol can provide different definitions for different ABIs.
One of these will be the "native" implementation—`ABIInternal` for
functions defined in Go and `ABI0` for functions defined in
assembly—while the others will be "ABI wrappers" that simply translate
to the ABI of the native implementation and call it.
In the linker, each symbol is already identified with a (name,
version) pair.
The implementation will simply map ABIs to linker symbol versions.
All functions defined in Go will be natively `ABIInternal`, and the Go
compiler will assume all functions provide an `ABIInternal`
implementation.
Hence, all cross-package calls and all indirect calls (closure calls
and interface method calls) will use `ABIInternal`.
If the native implementation of the called function is `ABI0`, this
will call a wrapper, which will call the `ABI0` implementation.
For direct calls, if the compiler knows the target is a native `ABI0`
function, it can optimize that call to use `ABI0` directly, but this
is strictly an optimization.
All functions defined in assembly will be natively `ABI0`, and all
references to text symbols from assembly will use the `ABI0`
definition.
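As a concrete illustration of this split (the package, function, and file names below are invented for this example, not taken from the proposal), a package might declare a function in Go and implement it in assembly. Under this proposal the assembly definition is natively `ABI0`, and the compiler would use the Go stub's type information to emit the `ABIInternal`→`ABI0` wrapper that Go callers, closures, and interface method tables use:

```go
// add.go: body-less stub. Its signature gives the compiler the type
// information needed to generate the ABIInternal→ABI0 wrapper.
package vecmath

func Add(x, y int64) int64
```

```asm
// add_amd64.s: the native ABI0 implementation, with arguments and
// results passed on the stack as in the current calling convention.
#include "textflag.h"

TEXT ·Add(SB), NOSPLIT, $0-24
	MOVQ x+0(FP), AX
	ADDQ y+8(FP), AX
	MOVQ AX, ret+16(FP)
	RET
```

Nothing in either file names an ABI explicitly; the assembler and compiler infer the ABIs from where each symbol is defined and referenced.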
To introduce another stable ABI in the future, we would extend the
assembly symbol syntax with a way to specify the ABI, but `ABI0` must
be assumed for all unqualified symbols for backwards compatibility.
In order to transparently bridge the two (or more) ABIs, we will
extend the assembler with a mode to scan for all text symbol
definitions and references in assembly code, and report these to the
compiler.
When these symbols are referenced or defined, respectively, from Go
code in the same package, the compiler will use the type information
available in Go declarations and function stubs to produce the
necessary ABI wrapper definitions.
The linker will check that all symbol references use the correct ABI
and ultimately keep everything honest.
## Rationale
The above approach allows us to introduce an internal calling
convention without any modifications to any safe Go code, or the vast
majority of assembly-using packages.
This is largely afforded by the extra build step that scans for
assembly symbol definitions and references.
There are two major trade-off axes that lead to different designs.
### Implicit vs explicit
Rather than implicitly scanning assembly code for symbol definitions
and references, we could instead introduce pragma comments that users
could use to explicitly inform the compiler of symbol ABIs.
This would make these ABI boundaries evident in code, but would likely
break many more existing packages.
In order to keep any assembly-using packages working as-is, this
approach would need default rules.
For example, body-less function stubs would likely need to default to
`ABI0`.
Any Go functions called from assembly would still need explicit
annotations, though such calls are rare.
This would cover most assembly-using packages, but function stubs are
also used for Go symbols pushed across package boundaries using
`//go:linkname`.
For link-named symbols, a pragma would be necessary to undo the
default `ABI0` behavior, and would depend on how the target function
was implemented.
Ultimately, there's no set of default rules that keeps all existing
code working.
Hence, this design proposes extracting symbols from assembly source to
derive the correct ABIs in the vast majority of cases.
### Wrappers vs single implementation
In this proposal, a single function can provide multiple entry-points
for different calling conventions.
One of these is the "native" implementation and the others are
intended to translate the calling convention and then invoke the
native implementation.
An alternative would be for each function to provide a single calling
convention and require all calls to that function to follow that
calling convention.
Other languages use this approach, such as C (e.g.,
`fastcall`/`stdcall`/`cdecl`) and Rust (`extern "C"`, etc).
This works well for direct calls, but for direct calls it's also
possible to compile away this proposal's ABI wrapper.
However, it dramatically complicates indirect calls since it requires
the calling convention to become *part of the type*.
Hence, in Go, we would either have to extend the type system, or
declare that only `ABIInternal` functions can be used in closures and
interface satisfaction, both of which are less than ideal.
Using ABI wrappers has the added advantage that calls to a Go function
from Go can use the fastest available ABI, while still allowing calls
via the stable ABI from assembly.
### When to generate wrappers
Finally, there's flexibility in this design around when exactly to
generate ABI wrappers.
In the current proposal, ABI wrappers are always generated in the
package where both the definition and the reference to a symbol
appear.
However, ABI wrappers can be generated anywhere Go type information is
available.
For example, the compiler could generate an `ABIInternal`→`ABI0`
wrapper when an `ABI0` function is stored in a closure or method
table, regardless of which package that happens in.
And the compiler could generate an `ABI0`→`ABIInternal` wrapper when
it encounters an `ABI0` reference from assembly by finding the
function's type either in the current package or via export info from
another package.
## Compatibility
This proposed change does not affect the functioning of any safe Go
code.
It can affect code that goes outside the [compatibility
guidelines](https://golang.org/doc/go1compat), but is designed to
minimize this impact.
Specifically:
1. Unsafe Go code can observe the calling convention, though doing so
requires violating even the [allowed uses of
unsafe.Pointer](https://golang.org/pkg/unsafe/#Pointer).
This does arise in the internal implementation of the runtime and
in cgo, both of which will have to be adjusted when we actually
change the calling convention.
2. Cross-package references where the definition and the reference are
different ABIs may no longer link.
There are various ways to form cross-package references in Go, though
all depends on `//go:linkname` (which is explicitly unsafe) or
complicated assembly symbol naming.
Specifically, the following types of cross-package references may no
longer link:
<table>
<thead>
<tr>
<th colspan="2" rowspan="2"></th>
<th colspan="4">def</th>
</tr>
<tr>
<th>Go</th>
<th>Go+push</th>
<th>asm</th>
<th>asm+push</th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="4">ref</th>
<th>Go</th> <td>✓</td><td>✓</td><td>✓</td><td>✗¹</td>
</tr>
<tr>
<th>Go+pull</th> <td>✓</td><td>✓</td><td>✗¹</td><td>✗¹</td>
</tr>
<tr>
<th>asm</th> <td>✓</td><td>✗²</td><td>✓</td><td>✓</td>
</tr>
<tr>
<th>asm+xref</th><td>✗²</td><td>✗²</td><td>✓</td><td>✓</td>
</tr>
</tbody></table>
In this table "push" refers to a symbol that is implemented in one
package, but its symbol name places it in a different package.
In Go this is accomplished with `//go:linkname` and in assembly this
is accomplished by explicitly specifying the package in a symbol name.
There are a total of two instances of "asm+push" on all of public
GitHub, both of which are already broken under current rules.
"Go+pull" refers to when an unexported symbol defined in one package
is referenced from another package via `//go:linkname`.
"asm+xref" refers to any cross-package symbol reference from assembly.
The vast majority of "asm+xref" references in public GitHub
repositories are to a small set of runtime package functions like
`entersyscall`, `exitsyscall`, and `memmove`.
These are serious abstraction violations, but they're also easy to
keep working.
There are two general groups of link failures in the above table,
indicated by superscripts.
In group 1, the compiler will create an `ABIInternal` reference to a
symbol that may only provide an `ABI0` implementation.
This can be worked-around by ensuring there's a Go function stub for
the symbol in the defining package.
For "asm" definitions this is usually the case anyway, and "asm+push"
definitions do not happen in practice outside the runtime.
In all of these cases, type information is available at the reference
site, so the compiler could record assembly ABI definitions in the
export info and produce the stubs in the referencing package, assuming
the defining package is imported.
In group 2, the assembler will create an `ABI0` reference to a symbol
that may only provide an `ABIInternal` implementation.
In general, calls from assembly to Go are quite rare because they
require either stack maps for the assembly code, or for the Go
function and everything it calls recursively to be `//go:nosplit`
(which is, in general, not possible to guarantee because of
compiler-inserted calls).
This can be worked-around by creating a dummy reference from assembly
in the defining package.
For "asm+xref" references to exported symbols, it would be possible to
address this transparently by using export info to construct the ABI
wrapper when compiling the referring package, again assuming the
defining package is imported.
The situations that cause these link failures are vanishingly rare in
public code corpora (outside of the standard library itself), all
depend on unsafe code, and all have reasonable workarounds.
Hence, we conclude that the potential compatibility issues created by
this proposal are worth the upsides.
### Calling runtime.panic* from assembly
One compatibility issue we found in public GitHub repositories was
references from assembly to `runtime.panic*` functions.
These calls to an unexported function are an obvious violation of
modularity, but also a violation of the Go ABI because the callers
invariably lack a stack map.
If a stack growth or GC were to happen during this call, it would
result in a fatal panic.
In these cases, we recommend wrapping the assembly function in a Go
function that performs the necessary checks and then calls the
assembly function.
Typically, this Go function will be inlined into its caller, so this
will not introduce additional call overhead.
For example, take a function that computes the pair-wise sums of two
slices and requires its arguments to be the same length:
```asm
// func AddVecs(x, y []float64)
TEXT ·AddVecs(SB), NOSPLIT, $16
// ... check lengths, put panic message on stack ...
CALL runtime·panic(SB)
```
This should instead be written as a Go function that uses language
facilities to panic, followed by a call to the assembly
implementation that implements the operation:
```go
func AddVecs(x, y []float64) {
if len(x) != len(y) {
panic("slices must be the same length")
}
addVecsAsm(x, y)
}
```
In this example, `AddVecs` is small enough that it will be inlined, so
there's no additional overhead.
## Implementation
Austin Clements will implement this proposal for Go 1.12.
This will allow the ABI split to soak for a release while the two
calling conventions are in fact identical.
Assuming that goes well, we can move on to changing the internal
calling convention in Go 1.13.
Since both calling conventions will initially be identical, the
implementation will initially use "ABI aliases" rather than full ABI
wrappers.
ABI aliases will be fully resolved by the Go linker, so in the final
binary every symbol will still have one implementation and all calls
(regardless of call ABI) will resolve to that implementation.
The rough implementation steps are as follows:
1. Reserve space in the linker's symbol version numbering to represent
symbol ABIs.
Currently, all non-static symbols have version 0, so any linker
code that depends on this will need to be updated.
2. Add a `-gensymabis` flag to `cmd/asm` that scans assembly sources
for text symbol definitions and references and produces a "symbol
ABIs" file rather than assembling the code.
3. Add a `-symabis` flag to `cmd/compile` that accepts this symbol
ABIs file.
4. Update `cmd/go`, `cmd/dist`, and any other mini-build systems in
the standard tree to invoke `asm` in `-gensymabis` mode and feed
the result to `compile`.
5. Add support for recording symbol ABIs and ABI alias symbols to the
object file format.
6. Modify `cmd/link` to resolve ABI aliases.
7. Modify `cmd/compile` to produce `ABIInternal` symbols for all Go
functions, produce `ABIInternal`→`ABI0` ABI aliases for Go
functions referenced from assembly, and produce
`ABI0`→`ABIInternal` ABI aliases for assembly functions referenced
from Go.
Once we're ready to modify the internal calling convention, the first
step will be to produce actual ABI wrappers.
We'll then likely want to start with a simple change, such as putting
the stack bound in a fixed register.
## Open issues
There are a few open issues in this proposal.
1. How should tools that render symbols from object files (e.g., `nm`
and `objdump`) display symbol ABIs?
With ABI aliases, there's little need to show this (though it can
affect how a symbol is resolved), but with full ABI wrappers it
will become more pressing.
Ideally this would be done in a way that doesn't significantly
clutter the output.
2. How do we represent symbols with different ABI entry-points in
platform object files, particularly in shared objects?
In the initial implementation using ABI aliases, we can simply
erase the ABI.
It may be that we need to use minor name mangling to encode the
symbol ABI in its name (though this does not have to affect the Go
symbol name).
3. How should ABI wrappers and `go:nosplit` interact?
In general, the wrapper needs to be `go:nosplit` if and only if the
wrapped function is `go:nosplit`.
However, for assembly functions, the wrapper is generated by the
compiler and the compiler doesn't currently know whether the
assembly function is `go:nosplit`.
It could conservatively make wrappers for assembly functions
`go:nosplit`, or the toolchain could include that information in
the symabis file.
---

<!-- design/draft-fuzzing.md -->

# Design Draft: First Class Fuzzing
Author: Katie Hockman
[golang.org/s/draft-fuzzing-design](https://golang.org/s/draft-fuzzing-design)
This is the original **Design Draft**, not the formal Go proposal. The contents
of this page may be out-of-date.
The accepted Go proposal is Issue [#44551](https://golang.org/issue/44551)
## Abstract
Systems built with Go must be secure and resilient.
Fuzzing can help with this, by allowing developers to identify and fix bugs,
empowering them to improve the quality of their code.
However, there is no standard way of fuzzing Go code today, and no
out-of-the-box tooling or support.
This proposal will create a unified fuzzing narrative which makes fuzzing a
first class option for Go developers.
## Background
Fuzzing is a type of automated testing which continuously manipulates inputs to
a program to find issues such as panics, bugs, or data races to which the code
may be susceptible.
These semi-random data mutations can discover new code coverage that existing
unit tests may miss, and uncover edge-case bugs which would otherwise go
unnoticed.
This type of testing works best when able to run more mutations quickly, rather
than fewer mutations intelligently.
Since fuzzing can reach edge cases which humans often miss, fuzz testing is
particularly valuable for finding security exploits and vulnerabilities.
Fuzz tests have historically been authored primarily by security engineers, and
hackers may use similar methods to find vulnerabilities maliciously.
However, writing fuzz tests needn’t be constrained to developers with security
expertise.
There is great value in fuzz testing all programs, including those
which may be more subtly security-relevant, especially those working with
arbitrary user input.
Other languages support and encourage fuzz testing.
[libFuzzer](https://llvm.org/docs/LibFuzzer.html) and
[AFL](https://lcamtuf.coredump.cx/afl/) are widely used, particularly with
C/C++, and AFL has identified vulnerabilities in programs like Mozilla Firefox,
Internet Explorer, OpenSSH, Adobe Flash, and more.
In Rust,
[cargo-fuzz](https://fitzgeraldnick.com/2020/01/16/better-support-for-fuzzing-structured-inputs-in-rust.html)
allows for fuzzing of structured data in addition to raw bytes, allowing for
even more flexibility with authoring fuzz tests.
Existing tools in Go, such as go-fuzz, have many [success
stories](https://github.com/dvyukov/go-fuzz#trophies), but there is no fully
supported or canonical solution for Go.
The goal is to make fuzzing a first-class experience, making it so easy that it
becomes the norm for Go packages to have fuzz tests.
Having fuzz tests available in a standard format makes it possible to use them
automatically in CI, or even as the basis for experiments with different
mutation engines.
There is strong community interest for this.
It’s the third most supported
[proposal](https://github.com/golang/go/issues/19109) on the issue tracker (~500
+1s), with projects like [go-fuzz](https://github.com/dvyukov/go-fuzz) (3.5K
stars) and other community-led efforts that have been in the works for several
years.
Prototypes exist, but lack core features like robust module support, go command
integration, and integration with new [compiler
instrumentation](https://github.com/golang/go/issues/14565).
## Proposal
Support `Fuzz` functions in Go test files, making fuzzing a first class option
for Go developers through unified, end-to-end support.
## Rationale
One alternative would be to keep with the status quo and ask Go developers to
use existing tools, or build their own as needed.
Developers could use tools
like [go-fuzz](https://github.com/dvyukov/go-fuzz) or
[fzgo](https://github.com/thepudds/fzgo) (built on top of go-fuzz) to solve some
of their needs.
However, each existing solution involves more work than typical Go testing, and
is missing crucial features.
Fuzz testing shouldn’t be any more complicated, or any less feature-complete,
than other types of Go testing (like benchmarking or unit testing).
Existing solutions add extra overhead such as custom command line tools,
separate test files or build tags, lack of robust modules support, and lack of
testing/customization support from the standard library.
By making fuzzing easier for developers, we will increase the amount of Go code
that’s covered by fuzz tests.
This will have particularly high impact for heavily depended upon or
security-sensitive packages.
The more Go code that’s covered by fuzz tests, the more bugs will be found and
fixed in the wider ecosystem.
These bug fixes matter for the stability and security of systems written in Go.
The best solution for Go in the long-term is to have a feature-rich, fully
supported, unified narrative for fuzzing.
It should be just as easy to write fuzz tests as it is to write unit tests.
Developers should be able to use existing tools for which they are already
familiar, with small variations to support fuzzing.
Along with the language support, we must provide documentation, tutorials, and
incentives for Go package owners to add fuzz tests to their packages.
This is a measurable goal, and we can track the number of fuzz tests and
resulting bug fixes resulting from this design.
Standardizing this also provides new opportunities for other tools to be built,
and integration into existing infrastructure.
For example, this proposal creates consistency for building and running fuzz
tests, making it easier to build turnkey
[OSS-Fuzz](https://github.com/google/oss-fuzz) support.
In the long term, this design could start to replace existing table tests,
seamlessly integrating into the existing Go testing ecosystem.
Some motivations written or provided by members of the Go community:
* https://tiny.cc/why-go-fuzz
* [Around 400 documented bugs](https://github.com/dvyukov/go-fuzz#trophies)
were found by owners of various open-source Go packages with go-fuzz.
## Compatibility
This proposal will not impact any current compatibility promises.
It is possible that there are existing `FuzzX` functions in yyy\_test.go files
today, and the go command will emit an error on such functions if they have an
unsupported signature.
This should however be unlikely, since most existing fuzz tools don’t
support these functions within yyy\_test.go files.
## Implementation
There are several components to this design draft which are described below.
The big pieces to be supported in the MVP are: support for fuzzing built-in
types, structs, and types which implement the BinaryMarshaler and
BinaryUnmarshaler interfaces or the TextMarshaler and TextUnmarshaler
interfaces, a new `testing.F` type, full `go` command support, and building a
tailored-to-Go fuzzing engine using the [new compiler
instrumentation](https://golang.org/issue/14565).
There is already a lot of existing work that has been done to support this, and
we should leverage as much of that as possible when building native support,
e.g. [go-fuzz](https://github.com/dvyukov/go-fuzz),
[fzgo](https://github.com/thepudds/fzgo).
Work for this will be done in a dev branch (e.g. dev.fuzzing) of the main Go
repository, led by Katie Hockman, with contributions from other members of the
Go team and members of the community as appropriate.
### Overview
The **fuzz test** is a `FuzzX` function in a test file. Each fuzz test has
its own corpus of inputs.
The **fuzz target** is the function that is executed for every seed or
generated corpus entry.
At the beginning of the [fuzz test](#fuzz-test), a developer provides a
“[seed corpus](#seed-corpus)”.
This is an interesting set of inputs that will be tested using <code>[go
test](#go-command)</code> by default, and can provide a starting point for a
[mutation engine](#fuzzing-engine-and-mutator) if fuzzing.
The testing portion of the fuzz test happens in the fuzz target, which is the
function passed to `f.Fuzz`.
This function runs much like a standard unit test with `testing.T` for each
input in the seed corpus.
If the developer is fuzzing with the new `-fuzz` flag with `go test`, then a
[generated corpus](#generated-corpus) will be managed by the fuzzing engine, and
a mutator will generate new inputs to run against the testing function,
attempting to discover interesting inputs or [crashers](#crashers).
With the new support, a fuzz test could look like this:
```
func FuzzMarshalFoo(f *testing.F) {
// Seed the initial corpus
f.Add("cat", big.NewInt(1341))
f.Add("!mouse", big.NewInt(0))
// Run the fuzz test
f.Fuzz(func(t *testing.T, a string, num *big.Int) {
t.Parallel() // seed corpus tests can run in parallel
if num.Sign() <= 0 {
t.Skip() // only test positive numbers
}
val, err := MarshalFoo(a, num)
if err != nil {
t.Skip()
}
if val == nil {
t.Fatal("MarshalFoo: val == nil, err == nil")
}
a2, num2, err := UnmarshalFoo(val)
if err != nil {
t.Fatalf("failed to unmarshal valid Foo: %v", err)
}
if a2 == nil || num2 == nil {
t.Error("UnmarshalFoo: a==nil, num==nil, err==nil")
}
if a2 != a || !num2.Equal(num) {
t.Error("UnmarshalFoo does not match the provided input")
}
})
}
```
### testing.F
`testing.F` works similarly to `testing.T` and `testing.B`.
It will implement the `testing.TB` interface.
Functions that are new and only apply to `testing.F` are listed below.
```
// Add will add the arguments to the seed corpus for the fuzz test. This
// cannot be invoked after or within the fuzz target. The args must match
// those in the fuzz target.
func (f *F) Add(args ...interface{})
// Fuzz runs the fuzz target, ff, for fuzz testing. While fuzzing with -fuzz,
// the fuzz test and ff may be run in multiple worker processes that don't
// share global state within the process. Only one call to Fuzz is allowed per
// fuzz test, and any subsequent calls will panic. If ff fails for a set of
// arguments, those arguments will be added to the seed corpus.
func (f *F) Fuzz(ff interface{})
```
### Fuzz test
A fuzz test has two main components: 1) seeding the corpus and 2) the fuzz
target which is executed for items in the corpus.
1. Defining the seed corpus and any necessary setup work is done before the
fuzz target, to prepare for fuzzing.
These inputs, as well as those in `testdata/corpus/FuzzTest`, are run by
default with `go test`.
1. The fuzz target is executed for each item in the seed corpus.
If this fuzz test is being fuzzed, then new inputs will be generated and
continuously tested using the fuzz target.
The arguments to `f.Add(...)` and the arguments of the fuzz target must be the
same type within the fuzz test, and there must be at least one argument
specified.
This will be ensured by a vet check.
Fuzzing of built-in types (e.g. simple types, maps, arrays) and of types which
implement the BinaryMarshaler and TextMarshaler interfaces is supported.
In the future, structs that do not implement the BinaryMarshaler and
TextMarshaler interfaces may be supported by building them based on their
exported fields.
Interfaces, functions, and channels are not appropriate types to fuzz, so will
never be supported.
### Seed Corpus
The **seed corpus** is the user-specified set of inputs to a fuzz test which
will be run by default with go test.
These should be composed of meaningful inputs to test the behavior of the
package, as well as a set of regression inputs for any newly discovered bugs
identified by the fuzzing engine.
This set of inputs is also used to “seed” the corpus used by the fuzzing engine
when mutating inputs to discover new code coverage.
A good seed corpus can save the mutation engine a lot of work (for example
adding a new key type to a key parsing function).
Each fuzz test will always look in the package’s `testdata/corpus/FuzzTest`
directory for an existing seed corpus to use, if one exists.
New crashes will also be written to this directory.
The seed corpus can be populated programmatically using `f.Add` within the fuzz
test.
_Examples:_
1: A fuzz target takes a single `[]byte`.
```
f.Fuzz(func(t *testing.T, b []byte) {...})
```
This is the typical “non-structured fuzzing” approach, and only the single
[]byte will be mutated while fuzzing.
2: A fuzz target takes two arguments.
```
f.Fuzz(func(t *testing.T, a string, num *big.Int) {...})
```
This example uses string, which is a built-in type, and as such can be decoded directly.
`*big.Int` implements `UnmarshalText`, so can also be unmarshaled using that
method.
The mutator will alter the bytes of both the string and the *big.Int while
seeking new code coverage.
### Corpus file encoding
The `testdata/corpus` directory will hold corpus files which act as the seed
corpus as well as a set of regression tests for identified crashers.
Corpus files must be encoded to support multiple fuzzing arguments.
The first line of the corpus file indicates the encoding "version" of this file,
which for version 1 will be `go test fuzz v1`.
This is to indicate how the file was encoded, which allows for new, improved
encodings in the future.
For version 1, each subsequent line represents the value of each type making up
the corpus entry. Each line is copy-pastable directly into Go code. The only
case where the line would require editing is for imported struct types, in which
case the import path would be removed when used in code.
For example:
```
go test fuzz v1
float64(45.241)
int(12345)
[]byte("ABC\xa8\x8c\xb3G\xfc")
example.com/foo.Bar.UnmarshalText("\xfe\x99Uh\xb4\xe29\xed")
```
A tool will be provided that can convert between binary files and corpus files
(in both directions).
This tool would serve two main purposes.
It would allow binary files, such as images, or files from other fuzzers, to be
ported over into seed corpus for Go fuzzing.
It would also convert otherwise indecipherable hex bytes into a binary format
which may be easier to read and edit.
To make it easier to understand new crashes, each crash found by the fuzzing
engine will be written to a binary file in $GOCACHE.
This file should not be checked in, as the crash will have already been written
to a corpus file in testdata within the module.
Instead, this file is a way to quickly get an idea about the input which caused
the crash, without requiring a tool to decode it.
### Fuzzing Engine and Mutator
A new **coverage-guided fuzzing engine**, written in Go, will be built.
This fuzzing engine will be responsible for using compiler instrumentation to
understand coverage information, generating test arguments with a mutator, and
maintaining the corpus.
The **mutator** is responsible for working with a generator to mutate bytes to
be used as input to the fuzz test.
Take the following fuzz target arguments as an example.
```
A string // N bytes
B int64 // 8 bytes
Num *big.Int // M bytes
```
A generator will provide some bytes for each type, where the number of bytes
could be constant (e.g. 8 bytes for an int64) or variable (e.g. N bytes for a
string, likely with some upper bound).
For constant-length types, the number of bytes can be hard-coded into the
fuzzing engine, making generation simpler.
For variable-length types, the mutator is responsible for varying the number of
bytes requested from the generator.
These bytes then need to be converted to the types used by the fuzz target.
The string and other built-in types can be decoded directly.
For other types, this can be done using either
<code>[UnmarshalBinary](https://pkg.go.dev/encoding?tab=doc#BinaryUnmarshaler)</code>
or
<code>[UnmarshalText](https://pkg.go.dev/encoding?tab=doc#TextUnmarshaler)</code>
if implemented on the type.
In the future, it may support fuzzing struct types which don't implement these
marshalers by building them from their exported fields.
#### Generated corpus
A generated corpus will be managed by the fuzzing engine and will live outside
the module in a subdirectory of $GOCACHE.
This generated corpus will grow as the fuzzing engine discovers new coverage.
The details of how the corpus is built and processed should be unimportant to
users.
This should be a technical detail that developers don’t need to understand in
order to seed a corpus or write a fuzz test.
Any existing files that a developer wants to include in the fuzz test may be
added to the seed corpus.
### Crashers
A **crasher** is a panic or failure in the fuzz target, or a race condition,
which was found while fuzzing.
By default, the fuzz test will stop after the first crasher is found, and a
crash report will be provided.
Crash reports will include the inputs that caused the crash and the resulting
error message or stack trace.
The crasher inputs will be written to the package's testdata/corpus directory
after being minimized where possible.
Since this crasher is added to testdata/corpus, which will then be run by
default as part of the seed corpus for the fuzz test, this can act as a test
for the new failure.
A user experience may look something like this:
1. A user runs `go test -fuzz=FuzzFoo`, and a crasher is found while fuzzing.
1. The arguments that caused the crash are added to the testdata/corpus
directory of that package.
1. A subsequent run of `go test` (without needing `-fuzz=FuzzFoo`) will then
reproduce this crash, and continue to fail until the bug is fixed.
A user could also run `go test -run=FuzzFoo/<filename>` to only run a
specific file in the testdata/corpus directory when debugging.
### Go command
Fuzz testing will only be supported in module mode, and if run in GOPATH mode,
the fuzz tests will be ignored.
Fuzz tests will be in *_test.go files, and can be in the same file as Test and
Benchmark targets.
These test files can exist wherever *_test.go files can currently live, and do
not need to be in any fuzz-specific directory or have a fuzz-specific file name
or build tag.
The generated corpus will be in a new directory within `$GOCACHE`, in the form
$GOCACHE/fuzz/$pkg/$test/$name, where $pkg is the package path containing the
fuzz test, $test is the name of the fuzz test, and $name is the name of the
file.
The default behavior of `go test` will be to build and run the fuzz tests
using the seed corpus only.
No special instrumentation would be needed, the mutation engine would not run,
and the test can be cached as usual.
This default mode **will not** run the generated corpus against the fuzz test.
This is to allow for reproducibility and cacheability for `go test` executions
by default.
In order to run a fuzz test with the mutation engine, `-fuzz` will take a
regexp which must match only one fuzz test.
In this situation, only the fuzz test will run (ignoring all other tests).
Only one package is allowed to be tested at a time in this mode.
The following flags will be added or have modified meaning:
```
-fuzz name
Run the fuzz test with the given regexp. Must match at most one fuzz
test.
-fuzztime
Run enough iterations of the fuzz test to take t, specified as a
time.Duration (for example, -fuzztime 1h30s).
The default is to run forever.
The special syntax Nx means to run the fuzz test N times
(for example, -fuzztime 100x).
-keepfuzzing
Keep running the fuzz test if a crasher is found. (default false)
-parallel
Allow parallel execution of a fuzz target that calls t.Parallel when
running the seed corpus.
While fuzzing with -fuzz, the value of this flag is the maximum number of
workers to run the fuzz target simultaneously; by default, it is set to
the value of GOMAXPROCS.
Note that -parallel only applies within a single test binary.
-race
Enable data race detection while fuzzing. (default false)
-run
Run only those tests, examples, and fuzz tests matching the regular
expression.
For testing a single seed corpus entry for a fuzz test, the regular
expression can be in the form $test/$name, where $test is the name of
the fuzz test, and $name is the name of the file (ignoring file
extensions) to run.
```
`go test` will not respect `-p` when running with `-fuzz`, as it doesn't make
sense to fuzz multiple packages at the same time.
There will also be a new flag, `-fuzzcache`, introduced to `go clean`.
When this flag is not set, `go clean` will not automatically remove generated
corpus files, even though they are written into a subdirectory of `$GOCACHE`.
In order to remove the generated corpus files, one must run
`go clean -fuzzcache`, which will remove all generated corpus in `$GOCACHE`.
## Open questions and future work
### Fuzzing engine supports multiple fuzz tests at once
The current design allows matching one and only one fuzz test with `-fuzz` per
package.
This is to eliminate complexity in the early prototype, and move towards a
working solution as quickly as possible.
However, there are use cases for matching more than one fuzz test with
`-fuzz`.
For example, in the cases where developers want to fuzz an entire package over a
long period of time, it would be useful for the fuzzing engine to support
cycling around multiple fuzz tests at once with a single `go test -fuzz`
command.
This is likely to be considered in future iterations of the design.
### Options
There are options that developers often need to fuzz effectively and safely.
These options will likely make the most sense on a test-by-test basis,
rather than as a `go test` flag.
Which options to make available, and precisely how these will be defined still
needs some investigation.
For example, it could look something like this:
```
func FuzzFoo(f *testing.F) {
f.MaxInputSize(1024)
f.Fuzz(func(t *testing.T, a string) {
...
})
}
```
### Flag for generated corpus directory
Developers may prefer to store the generated corpus in a separate repository, in cloud storage, or some other shared location, rather than in each developer's `$GOCACHE`.
The details about how best to support developers with these use cases still
needs investigation, and is not a requirement for the MVP.
However, there may be support for a `-fuzzdir` flag (or something similar) in
the future, which specifies the location of the generated corpus.
### Dictionaries
Support accepting [dictionaries](https://llvm.org/docs/LibFuzzer.html#id31) when
seeding the corpus to guide the fuzzer.
### Instrument specific packages only
We might need a way to specify to instrument only some packages for coverage,
but there isn’t enough data yet to be sure.
One example use case for this would be a fuzzing engine which is spending too
much time discovering coverage in the encoding/json parser, when it should
instead be focusing on coverage for some intended package.
There are also questions about whether or not this is possible with the current
compiler instrumentation available.
By runtime, the fuzz test will have already been compiled, so recompiling to
leave out (or only include) certain packages may not be feasible.
### Custom Generators
There may be a need for developers to craft custom generators for use by the
mutator.
The design can support this by using marshaling/unmarshaling to edit certain
fields, but the work to do so is a bit cumbersome.
For example, if a string should always have some prefix in order to work in the
fuzz target, one could do the following.
```
type myString struct {
s string
}
func (m *myString) MarshalText() (text []byte, err error) {
return []byte(m.s[len("SOME_PREFIX"):]), nil
}
func (m *myString) UnmarshalText(text []byte) error {
m.s = "SOME_PREFIX" + string(text)
return nil
}
func FuzzFoo(f *testing.F) {
f.Fuzz(func(t *testing.T, m *myString) {...})
}
```

# Proposal: Multi-project gopls workspaces
Author(s): Heschi Kreinick, Rebecca Stambler
Last updated: [Date]
Discussion at https://golang.org/issue/32394.
(Previously, https://golang.org/issue/37720)
## Abstract
We propose a new workspace model for gopls that supports editing multiple
projects at the same time, without compromising editor features.
## Background
`gopls` users may want to edit multiple projects in one editor session.
For example, a microservice might depend on a proprietary infrastructure
library, and a feature might require working across both. In `GOPATH` mode,
that's relatively trivial, because all code exists in the same context. In
module mode, where multiple versions of dependencies are in play, it is much
more difficult.
Consider the following application:
![Diagram of a single application](37720/Fig1.png)
If I `Find References` on an `errors.Wrapf` call in `app1`, I expect to see
references in `lib` as well. This is especially true if I happen to own `lib`,
but even if not I may be looking for usage examples. In this situation,
supporting that is easy.
Now consider a workspace with two applications.
![Diagram of two applications](37720/Fig2.png)
Again, I would expect a `Find References` in either App1 or App2 to find all
`Wrapf` calls, and there's no reason that shouldn't work in this scenario. In
module mode, things can be more difficult. Here's the next step in complexity:
![Diagram of two applications with different lib versions](37720/Fig3.png)
At the level of the type checker, `v1.0.0` and `v1.0.1` of the library are
completely unrelated packages that happen to share a name. We as humans expect
the APIs to match, but they could be completely different. Nonetheless, in this
situation we can simply load both, and if we do a `Find References` on `Wrapf`
there should be no problem finding all of them.
That goes away in the next step:
![Diagram of two applications with different versions of all deps](37720/Fig4.png)
Now there are two versions of `Wrapf`.
Again, at the type-checking level, these packages have nothing to do with each
other. There is no easy way to relate `Wrapf` from `v0.9.1` with its match from
`v0.9.0`. We would have to do a great deal of work to correlate all the
versions of a package together and match them up. (Wrapf is a simple case;
consider how we'd match them if it was a method receiver, or took a complex
struct, or a type from another package.) Worse yet, how would a multi-project
rename work? Would it rename in all versions?
One final case:
![Diagram of an application with dependency fan-out](37720/Fig5.png)
Imagine I start in App1 and `Go To Definition` on a function from the utility
library. So far, no problem: there's only one version of the utility library in
scope. Now I `Go To Definition` on `Wrapf`.
Which version should I go to?
The answer depends on where I came from, but that information can't be
expressed in the filesystem path of the source file, so there's no way for
`gopls` to keep track.
## Proposal
We propose to require all projects in a multi-project workspace use the same
set of dependency versions. For `GOPATH`, this means that all the projects
should have the same `GOPATH` setting. For module mode, it means creating one
super-module that forces all the projects to resolve their dependencies
together. Effectively, this would create an on-the-fly monorepo.
This rules out users working on projects with mutually conflicting
requirements, but that will hopefully be rare.
Hopefully `gopls` can create this super-module automatically.
The super-module would look something like:
```
module gopls-workspace
require (
example.com/app1 v0.0.0-00010101000000-000000000000
example.com/app2 v0.0.0-00010101000000-000000000000
)
replace (
example.com/app1 => /abs/path/to/app1
example.com/app2 => /abs/path/to/app2
// Further replace directives included from app1 and app2
)
```
Note the propagation of replace directives from the constituent projects, since
they would otherwise not take effect.
## Rationale
For users to get the experience they expect, with all of the scenarios above
working, the only possible model is one where there's one version of any
dependency in scope at a time.
We don't think there are any realistic alternatives to this model.
We could try to include multiple versions of packages and then correlate them
by name and signature (as discussed above) but that would be error-prone to
implement. And if there were any problems matching things up, features like
`Find References` would silently fail.
## Compatibility
No compatibility issues.
## Implementation
The implementation involves (1) finding all of the modules in the view,
(2) automatically creating the super-module, and (3) adjusting gopls's
[go/packages] queries and `go` command calls to run in the correct modules.
When a view is created, we traverse the view's root folder and search for all
of the existing modules. These modules will then be used to programmatically
create the super-module. Once each view is created, it will load all of its
packages (initial workspace load). As of 2020-08-13, for views in GOPATH or in
a module, the initial workspace load takes the form of a `go list ./...` query.
With the current design, the initial workspace load will need to be a query of the
form: `go list example.com/app1/... example.com/app2/...`, within the
super-module. In GOPATH mode, we will not create the super-module.
All [go/packages] queries should be made from the super-module directory. Only
`go mod` commands need to be made from the module to which they refer.
### The super-module's `go.mod` file
As of 2020-08-13, `gopls` relies on the `go` command's `-modfile` flag to avoid
modifying the user's existing `go.mod` file. We will continue to use the
`-modfile` flag when running the `go` command from within a module, but
`-modfile` is no longer necessary when we run the `go` command from the
super-module.
The `go` command does require that its working directory contain a `go.mod`
file, but we want to run commands from the super-module without exposing
super-module's `go.mod` file to the user. To handle this, we will create a
temporary directory containing the super-module's `go.mod` file, to act as the
module root for any [go/packages] queries.
### Configuration
#### `gopls.mod`
We should allow users to provide their own super-module `go.mod` file, for
extra control over the developer experience. This can also be used to mitigate
any issues with the automatic creation of the super-module. We should detect a
`gopls.mod` in the view's root folder and use that as the super-module if
present.
## Additional Considerations
The authors have not yet considered the full implications of this design on:
* Nested modules
* A workspace pattern of `module`/... will include packages in nested modules
inside it, whether the user wants them or not.
* Modules with replace directives (mentioned briefly above)
* Views containing a single module within the view's root folder
* Consider not creating a super-module at all
If any issues are noted during the implementation process, this document will
be updated accordingly.
This design means that there is no longer any need to have multiple views in a
session. The `gopls` team will need to reconsider whether there is value in
offering users a standalone workspace for each workspace folder, rather than
merging all workspace folders into one view.
## Transition
Users have currently been getting support for multiple modules in `gopls` by
adding each module as its own workspace folder. Once the implementation is
complete, we will need to help users transition to this new model--otherwise
they will find that memory consumption rises, as `gopls` will have loaded the
same module into memory multiple times. We will need to detect if a workspace
folder is part of multiple views and alert the user to adjust their workspace.
While the `gopls` team implements this design, the super-module functionality
will be gated behind an opt-in flag.
## Open issues (if applicable)
The `go.mod` editing functionality of `gopls` should continue to work as it
does today, even in multi-project mode. Most likely it should simply continue
to operate in one project at a time.
[go/packages]: https://golang.org/x/tools/go/packages

# Proposal: A minimal release process for Go repositories
Author: Dave Cheney <dave@cheney.net>
Last updated: 03 December 2015
Status: Withdrawn
Discussion at https://golang.org/issue/12302.
## Abstract
In the same way that gofmt defines a single recommended way to format Go source
code, this proposal establishes a single recommended procedure for releasing the
source code for Go programs and libraries.
This is intended to be a light weight process to facilitate tools that automate
the creation and consumption of this release information.
## Background
Releasing software is useful. It separates the every day cut and thrust of
software development, patch review, and bug triage, from the consumers of the
software, a majority of whom are not developers of your software and only wish
to be concerned with the versions that you tell them are appropriate to use.
For example, the Go project itself offers a higher level of support to users
who report bugs against our released versions.
In fact we specifically recommend against people using unreleased versions in
production.
A key differentiator between released and unreleased software is the version
number.
Version numbers create a distinct identifier that increments at its own pace
and under different drivers to the internal identifier of the version control
system (VCS) or development team.
## Proposal
This proposal describes a minimal release procedure of tagging a repository
which holds the source of one or more Go packages.
### Release process
This proposal recommends that repository owners adopt the
[Semantic Versioning 2.0 standard](https://SemVer.org/spec/v2.0.0.html) (SemVer)
for their numbering scheme.
Source code is released by tagging (eg. `git tag`) the VCS repository with a
string representing a SemVer compatible version number for that release.
This proposal is not restricted to git, any VCS that has the facility to assign
a tag-like entity to a revision is supported.
### Tag format
The format of the VCS tag is as follows:
```
v<SemVer>
```
That is, the character `v`, U+0076, followed directly by a string which is
compliant with the [Semantic Versioning 2.0 standard](https://SemVer.org/spec/v2.0.0.html).
When inspecting a repository, tags which do not fit the format described above
must be ignored for the purpose of determining the available release versions.
SemVer requires that once a version number has been assigned, it must not
change; thus a tag, once assigned, must not be reused.
## Rationale
Go libraries and programs do not have version numbers in the way it is commonly
understood by our counterparts in other languages communities.
This is because there is no formalised notion of releasing Go source code.
There is no recognised process of taking an arbitrary VCS commit hash and
assigning it a version number that is meaningful for both humans and machines.
Additionally, operating system distributors such as Debian and Ubuntu strongly
prefer to package released versions of a library or application, and are
currently reduced to
[doing things like this](https://packages.debian.org/stretch/golang-github-odeke-em-command-dev).
In the spirit of doing less and enabling more, this proposal establishes the
minimum required for humans and tools to identify released versions by inspecting
source code repositories.
It is informed by the broad support for semantic versioning across our
contemporaries like node.js (npm), rust (cargo), javascript (bower), and ruby
(rubygems), thereby allowing Go programmers to benefit from the experiences of
these other communities' dependency management ecosystems.
### Who benefits from adopting this proposal ?
This proposal will immediately benefit the downstream consumers of Go libraries
and programs. For example:
- The large ecosystem of tools like godeps, glide, govendor, gb, the
vendor-spec proposal and dozens more, that can use this information to
provide, for example, a command that will let users upgrade between minor
versions, or update to the latest patch released of their dependencies rather
than just the latest HEAD of the repository.
- Operating system distributions such as Debian, Fedora, Ubuntu, Homebrew, rely
on released versions of software for their packaging policies.
They don't want to pull random git hashes into their archives, they want to
pull released versions of the code and have release numbers that give them a
sense of how compatible new versions are with previous version.
For example, Ubuntu have a policy that we only accept patch releases into our
LTS distribution; no major version changes, no minor version changes that
include new features, only bug fixes.
- godoc.org could show users the documentation for the version of the package
they were using, not just whatever is at HEAD.
That `go get` cannot consume this version information today should not be an
argument against enabling other tools to do so.
### Why recommend SemVer ?
Applying an opaque release tag is not sufficient, the tag has to contain enough
semantic meaning for humans and tools to compare two version numbers and infer
the degree of compatibility, or incompatibility between them.
This is the goal of semantic versioning.
To cut to the chase, SemVer is not a magic bullet, it cannot force developers
to not do the wrong thing, only incentivise them to do the right thing.
This property would hold true no matter what version numbering methodology
was proposed, SemVer or something of our own concoction.
There is a lot to gain from working from a position of assuming Go programmers
want to do the right thing, not engineer a straitjacket process which
prevents them from doing the wrong thing.
The ubiquity of gofmt'd code, in spite of the fact the compiler allows a much
looser syntax, is evidence of this.
Adherence to a commonly accepted ideal of what constitutes a major, minor and
patch release is informed by the same social pressures that drive Go
programmers to gofmt their code.
### Why not allow the v prefix to be optional ?
The recommendation to include the `v` prefix is for compatibility with the
three largest Go projects, Docker, Kubernetes, and CoreOS, who have already
adopted this form.
Permitting the `v` prefix to be optional would mean some authors adopt it, and
others do not, which is a poor position for a standard.
In the spirit of gofmt, mandating the `v` prefix across the board means there
is exactly one tag form for implementations to parse, and outweighs the
personal choice of an optional prefix.
## Compatibility
There is no impact on the [compatibility guidelines](https://golang.org/doc/go1compat)
from this proposal.
However, in the past, members of the Go team have advocated that when a libraries'
API changes in an incompatible way, the import path of the library should be
changed, usually including a version number as a component of the import path.
This proposal deprecates this recommendation.
Authors of Go libraries should follow these two maxims:
1. Packages which are the same, must share the same import path. This proposal
provides the mechanism for consumers to identify a specific release version
without the requirement to encode that information in the import path.
2. Packages which are not the same, must not have the same import path. A clone or
fork of a library or project is not the same as its parent, so it should have
a new name -- a new import path.
## Implementation
A summary of this proposal, along with examples and a link to this proposal,
will be added to the [How to Write Go Code](https://golang.org/doc/code.html#remote)
section of the [golang.org](https://golang.org) website.
Authors who wish to release their software must use a tag in the form described
above. An example would be:
```
% git tag -a v1.0.0 -m "release version 1.0.0"
% git push --tags
```
Authors are not prohibited from using other methods of releasing their software,
but should be aware that if those methods do not conform to the format described
above, those releases may be invisible to tools conforming to this proposal.
There is no impact on the Go release cycle; this proposal is not bound to a
deliverable in the current release cycle.
## Out of scope
The following items are out of scope of this proposal:
- How libraries and applications can declare the version numbers or ranges for
their dependencies.
- How go get may be changed to consume this version information.
Additionally, this proposal does not seek to change the release process or version
numbering scheme for the [Go](https://golang.org) distribution itself.
# Proposal: Generic parameterization of array sizes
Author(s): Andrew Werner
Last updated: March 16th, 2021
## Abstract
With the type parameters generics proposal accepted, though not yet
fully specified or implemented, we can begin to talk about extensions. [That
proposal][type parameters] lists the following omission:
> No parameterization on non-type values such as constants. This arises most
obviously for arrays, where it might sometimes be convenient to write type
`Matrix[n int] [n][n]float64`. It might also sometimes be useful to specify
significant values for a container type, such as a default value for elements.
This proposal seeks to resolve this limitation by (a) specifying when `len` can
be used as a compile-time constant and (b) adding syntax to specify constraints
for all arrays of a given type in type lists.
## Background
An important property of the generics proposal is that it enables the creation
of libraries of specialized container data structures. The existence of such
libraries will help developers write more efficient code as these data
structures will be able to allocate fewer objects and provide greater access
locality. [This Google blog post][block based data structures] about block-based
C++ data structures drives home the point.
The justification for this feature is laid out in the omission quoted above from the type parameters proposal.
The motivation that I've stumbled upon is in trying to implement a B-Tree
and allowing the client to dictate the degree of the node.
One initial idea would be to allow the client to provide the actual array
which will be backing the data inside the node as a type parameter. This might
actually be okay in some data structure use cases, but in a B-Tree it's bad
because we still would like to instantiate an array for the pointers and that
array needs to have a size that is a function of the data array.
The proposal here seeks to make it possible for clients to provide default
values for array sizes of generic data structures in a way that is minimally
invasive to the concepts which Go already has. The shorthand comment in the
type parameters proposal's list of omissions waves its hand at what feels
like a number of new and complex concepts for the language.
## Proposal
This proposal attempts to side-step questions of how one might provide a
scalar value in a type constraint by not ever providing a scalar directly.
This proposal recognizes that constants can be used to specify array lengths.
It also notes that the value of `len()` can be computed as a compile-time
constant in some cases. Lastly, it observes that type lists could be extended
minimally to indicate a constraint that a type is an array of a given type
without constraining the length of the array.
### The vanilla generic B-Tree
Let's explore the example of a generic B-Tree with a fixed-size buffer. Find
such an example [here][vanilla btree].
```go
// These constants are the wart.
const (
degree = 16
maxItems = 2*degree - 1
minItems = degree - 1
)
func NewBTree[K, V any](cmp LessFn[K]) OrderedMap[K, V] {
return &btree[K, V]{cmp: cmp}
}
type btree[K, V any] struct {
cmp LessFn[K]
root *node[K, V]
}
// ...
type node[K, V any] struct {
count int16
leaf bool
keys [maxItems]K
vals [maxItems]V
children [maxItems + 1]*node[K, V]
}
```
### Parameterized nodes
Then we allow parameterization on the node type within the btree implementation
so that different node concrete types with different memory layouts may be
used. Find an example of this generalization
[here][parameterized node btree].
```go
type nodeI[K, V, N any] interface {
type *N
find(K, LessFn[K]) (idx int, found bool)
insert(K, V, LessFn[K]) (replaced bool)
remove(K, LessFn[K]) (K, V, bool)
len() int16
at(idx int16) (K, V)
child(idx int16) *N
isLeaf() bool
}
func NewBTree[K, V any](cmp LessFn[K]) OrderedMap[K, V] {
type N = node[K, V]
return &btree[K, V, N, *N]{
cmp: cmp,
newNode: func(isLeaf bool) *N {
return &N{leaf: isLeaf}
},
}
}
type btree[K, V, N any, NP nodeI[K, V, N]] struct {
len int
cmp LessFn[K]
root NP
newNode func(isLeaf bool) NP
}
type node[K, V any] struct {
count int16
leaf bool
keys [maxItems]K
vals [maxItems]V
children [maxItems + 1]*node[K, V]
}
```
This still ends up using constants, and there's no easy way around that. You
might want to parameterize the arrays into the node like in
[this example][bad parameterization btree], but that still doesn't tell a
story about how to relate the children array to the items.
### The proposal to parameterize the arrays
Instead, we'd like to find a way to express the idea that there's a size
constant which can be used in the type definitions. The proposal would
result in an implementation that looked like
[this][proposal btree].
```go
// StructArr is a constraint that says that a type is an array of empty
// structs of any length.
type StructArr interface {
type [...]struct{}
}
type btree[K, V, N any, NP nodeI[K, V, N]] struct {
len int
cmp LessFn[K]
root NP
newNode func(isLeaf bool) NP
}
// NewBTree constructs a generic BTree-backed map with degree 16.
func NewBTree[K, V any](cmp LessFn[K]) OrderedMap[K, V] {
const defaultDegree = 16
return NewBTreeWithDegree[K, V, [defaultDegree]struct{}](cmp)
}
// NewBTreeWithDegree constructs a generic BTree-backed map with degree equal
// to the length of the array used as type parameter A.
func NewBTreeWithDegree[K, V any, A StructArr](cmp LessFn[K]) OrderedMap[K, V] {
type N = node[K, V, A]
return &btree[K, V, N, *N]{
cmp: cmp,
newNode: func(isLeaf bool) *N {
return &N{leaf: isLeaf}
},
}
}
type node[K, V any, A StructArr] struct {
count int16
leaf bool
keys [2*len(A) - 1]K
values [2*len(A) - 1]V
children [2 * len(A)]*node[K, V, A]
}
```
### The Matrix example
The example of the omission in type parameter proposal could be achieved in
the following way:
```go
type Dim interface {
type [...]struct{}
}
type SquareFloatMatrix2D[D Dim] [len(D)][len(D)]float64
```
### Summary
1) Support type list constraints to express that a type is an array
```go
// Array expresses a constraint that a type is an array of T of any
// size.
type Array[T any] interface {
type [...]T
}
```
2) Support a compile-time constant expression for `len([...]T)`
This handy syntax would permit parameterization of arrays relative to other
array types. Note that the constant expression `len` function on array types
could actually be implemented today using `unsafe.Sizeof` by a parameterization
over an array whose members have non-zero size. For example `len` could be
written as `unsafe.Sizeof([...]T)/unsafe.Sizeof(T)` so long as
`unsafe.Sizeof(T) > 0`.
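The `unsafe.Sizeof` workaround described above can be demonstrated with today's Go, assuming an element type of non-zero size; `elem` and `n` are illustrative names for this sketch.

```go
package main

import (
	"fmt"
	"unsafe"
)

// elem is a placeholder element type with non-zero size (16 bytes).
type elem struct{ a, b int64 }

// n is a compile-time constant: the total array size in bytes divided
// by the element size, i.e. the array length.
const n = unsafe.Sizeof([16]elem{}) / unsafe.Sizeof(elem{})

func main() {
	// Because n is a constant, it can size other arrays, mirroring the
	// 2*degree-1 relationship from the B-Tree example.
	var keys [2*n - 1]int
	fmt.Println(n, len(keys)) // 16 31
}
```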
## Rationale
This approach is simpler than generally providing a constant scalar expression
parameterization of generic types. Of the two elements of the proposal, neither
feels particularly out of line with the design of the language or its concepts.
The `[...]T` syntax exists in the language to imply length inference for array
literals, so it is not a hard concept to imagine. It is the deeper of the two
requirements needed to make this proposal work.
One potential downside of this proposal is that we're not really using the
array for anything other than its size which may feel awkward. For that reason
I've opted to use a constraint which forces the array to use `struct{}` values
to indicate that the structure of the elements isn't relevant. This awkwardness
feels justified if it side-steps introducing scalars as type parameters.
## Compatibility
This proposal is fully backwards compatible with all of the language and also
the now accepted type parameters proposal.
## Implementation
Neither of the two features of this proposal feel particularly onerous to
implement. My guess is that the `[...]T` type list constraint would be extremely
straightforward given an implementation of type parameters. The `len`
implementation is also likely to be straightforward given the existence of
both compile-time evaluation of `len` expressions on array types which exist
in the language and the constant nature of `unsafe.Sizeof`. Maybe there'd be
some pain in deferring the expression evaluation until after type checking.
[type parameters]: https://go.googlesource.com/proposal/+/refs/heads/master/design/go2draft-type-parameters.md
[block based data structures]: https://opensource.googleblog.com/2013/01/c-containers-that-save-memory-and-time.html
[vanilla btree]: https://go2goplay.golang.org/p/A5auAIdW2ZR
[parameterized node btree]: https://go2goplay.golang.org/p/TFn9BujIlc3
[bad parameterization btree]: https://go2goplay.golang.org/p/JGgyabtu_9F
[proposal btree]: https://go2goplay.golang.org/p/4o36RLxF73C
# Proposal: extend code coverage testing to include applications
Author(s): Than McIntosh
Last updated: 2022-03-02
Discussion at https://golang.org/issue/51430
## Abstract
This document contains a proposal for improving/revamping the system used in Go
for code coverage testing.
## Background
### Current support for coverage testing
The Go toolchain currently includes support for collecting and reporting
coverage data for Go unit tests; this facility is made available via the "go
test -cover" and "go tool cover" commands.
The current workflow for collecting coverage data is baked into "go test"
command; the assumption is that the source code of interest is a Go package
or set of packages with associated tests.
To request coverage data for a package test run, a user can invoke the test(s)
via:
```
go test -coverprofile=<filename> [package target(s)]
```
This command will build the specified packages with coverage instrumentation,
execute the package tests, and write an output file to "filename" with the
coverage results of the run.
The resulting output file can be viewed/examined using commands such as
```
go tool cover -func=<covdatafile>
go tool cover -html=<covdatafile>
```
Under the hood, the implementation works by source rewriting: when "go test" is
building the specified set of package tests, it runs each package source file
of interest through a source-to-source translation tool that produces an
instrumented/augmented equivalent, with instrumentation that records which
portions of the code execute as the test runs.
A function such as
```Go
func ABC(x int) {
if x < 0 {
bar()
}
}
```
is rewritten to something like
```Go
func ABC(x int) {GoCover_0_343662613637653164643337.Count[9] = 1;
if x < 0 {GoCover_0_343662613637653164643337.Count[10] = 1;
bar()
}
}
```
where "GoCover_0_343662613637653164643337" is a tool-generated structure with
execution counters and source position information.
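A simplified sketch of the shape of such a structure follows; the field names mirror what cmd/cover emits, but the array sizes here are illustrative and the real generated name carries the hash suffix shown above.

```go
package main

import "fmt"

// GoCover sketches the per-file coverage structure: one execution counter
// per coverable block, packed source positions, and per-block statement
// counts. Sizes are illustrative.
var GoCover = struct {
	Count   [11]uint32     // execution counter for each block
	Pos     [3 * 11]uint32 // packed start/end line and column per block
	NumStmt [11]uint16     // number of statements in each block
}{}

func main() {
	// Instrumented code marks a block as executed by setting its counter:
	GoCover.Count[9] = 1
	fmt.Println(GoCover.Count[9], len(GoCover.Pos))
}
```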
The "go test" command also emits boilerplate code into the generated
"_testmain.go" to register each instrumented source file and unpack the coverage
data structures into something that can be easily accessed at runtime.
Finally, the modified "_testmain.go" has code to call runtime routines that
emit the coverage output file when the test completes.
### Strengths and weaknesses of what we currently provide
The current implementation is simple and easy to use, and provides a good user
experience for the use case of collecting coverage data for package unit tests.
Since "go test" is performing both the build and the invocation/execution of the
test, it can provide a nice seamless "single command" user experience.
A key weakness of the current implementation is that it does not scale well-- it
is difficult or impossible to gather coverage data for **applications** as opposed
to collections of packages, and for testing scenarios involving multiple
runs/executions.
For example, consider a medium-sized application such as the Go compiler ("gc").
While the various packages in the compiler source tree have unit tests, and one
can use "go test" to obtain coverage data for those tests, the unit tests by
themselves only exercise a small fraction of the code paths in the compiler that
one would get from actually running the compiler binary itself on a large
collection of Go source files.
For such applications, one would like to build a coverage-instrumented copy of
the entire application ("gc"), then run that instrumented application over many
inputs (say, all the Go source files compiled as part of a "make.bash" run for
multiple GOARCH values), producing a collection of coverage data output files,
and finally merge together the results to produce a report or provide a
visualization.
Many folks in the Go community have run into this problem; there are large
numbers of blog posts and other pages describing the issue, and recommending
workarounds (or providing add-on tools that help); doing a web search for
"golang integration code coverage" will turn up many pages of links.
An additional weakness in the current Go toolchain offering relates to the way
in which coverage data is presented to the user by the "go tool cover"
commands.
The reports produced are "flat" and not hierarchical (e.g. a flat list of
functions, or a flat list of source files within the instrumented packages).
This way of structuring a report works well when the number of instrumented
packages is small, but becomes less attractive if there are hundreds or
thousands of source files being instrumented.
For larger applications, it would make sense to create reports with a more
hierarchical structure: first a summary by module, then package within module,
then source file within package, and so on.
Finally, there are a number of long-standing problems that arise from the source-to-source rewriting used by cmd/cover and the go command, including:
[#23883](https://github.com/golang/go/issues/23883)
"cmd/go: -coverpkg=all gives different coverage value when run on a
package list vs ./..."
[#23910](https://github.com/golang/go/issues/23910)
"cmd/go: -coverpkg packages imported by all tests, even ones that
otherwise do not use it"
[#27336](https://github.com/golang/go/issues/27336)
"cmd/go: test coverpkg panics when defining the same flag in
multiple packages"
Most of these problems arise because of the introduction of additional imports in the `_testmain.go` shim created by the Go command when carrying out a coverage test run (in combination with the "-coverpkg" option).
## Proposed changes
### Building for coverage
While the existing "go test" based coverage workflow will continue to be supported, the proposal is to add coverage as a new build mode for "go build".
In the same way that users can build a race-detector instrumented executable using "go build -race", it will be possible to build a coverage-instrumented executable using "go build -cover".
To support this goal, the plan will be to migrate a portion of the support for coverage instrumentation into the compiler, while still retaining the existing source-to-source rewriting strategy (a so-called "hybrid" approach).
### Running instrumented applications
Applications are deployed and run in many different ways, ranging from very
simple (direct invocation of a single executable) to very complex (e.g. gangs of
cooperating processes involving multiple distinct executables).
To allow for more complex execution/invocation scenarios, it doesn't make sense
to try to serialize updates to a single coverage output data file during the
run, since this would require introducing synchronization or some other
mechanism to ensure mutually exclusive access.
For non-test applications built for coverage, users will instead select an
output directory as opposed to a single file; each run of the instrumented
executable will emit data files within that directory. Example:
```
$ go build -o myapp.exe -cover ...
$ mkdir /tmp/mycovdata
$ export GOCOVERDIR=/tmp/mycovdata
$ <run test suite, resulting in multiple invocations of myapp.exe>
$ go tool cover -html=/tmp/mycovdata
$
```
For coverage runs in the context of "go test", the default will continue to be
emitting a single named output file when the test is run.
File names within the output directory will be chosen at runtime so as to
minimize the possibility of collisions, e.g. possibly something to the effect of
```
covdata.<metafilehash>.<processid>.<nanotimevalue>.out
```
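A sketch of composing such a name follows; the hash argument is a stand-in (the real meta-file hash computation is described later), and the exact set of components is not final.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// dataFileName builds a collision-resistant counter data file name from a
// meta-file hash (illustrative), the process ID, and a nanosecond timestamp.
func dataFileName(metaFileHash [16]byte) string {
	return fmt.Sprintf("covdata.%x.%d.%d.out",
		metaFileHash[:], os.Getpid(), time.Now().UnixNano())
}

func main() {
	fmt.Println(dataFileName([16]byte{0xab, 0xcd}))
}
```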
When invoked for reporting, the coverage tool itself will test its input
argument to see whether it is a file or a directory; in the latter case, it will
read and process all of the files in the specified directory.
### Programs that call os.Exit(), or never terminate
With the current coverage tooling, if a Go unit test invokes `os.Exit()` passing a
non-zero exit status, the instrumented test binary will terminate immediately
without writing an output data file.
If a test invokes `os.Exit()` passing a zero exit status, this will result in a
panic.
For unit tests, this is perfectly acceptable-- people writing tests generally
have no incentive or need to call `os.Exit`, it simply would not add anything in
terms of test functionality.
Real applications routinely finish by calling `os.Exit`, however, including
cases where a non-zero exit status is reported.
Integration test suites nearly always include tests that ensure an application
fails properly (e.g. returns with non-zero exit status) if the application
encounters an invalid input.
The Go project's `all.bash` test suite has many of these sorts of tests,
including test cases that are expected to cause compiler or linker errors (and
to ensure that the proper error paths in the tool are covered).
To support collecting coverage data from such programs, the Go runtime will need
to be extended to detect `os.Exit` calls from instrumented programs and ensure (in
some form) that coverage data is written out before the program terminates.
This could be accomplished either by introducing new hooks into the `os.Exit`
code, or possibly by opening and mmap'ing the coverage output file earlier in
the run, then letting writes to counter variables go directly to an mmap'd
region, which would eliminate the need to close the file on exit (credit to
Austin for this idea).
To handle server programs (which in many cases run forever and may not call
exit), APIs will be provided for writing out a coverage profile under user
control. The first API variants will support writing coverage data to a specific
directory path:
```Go
import "runtime/coverage"
var coverageoutdir = flag.String(...)
func server() {
...
if *coverageoutdir != "" {
// Meta-data is already available on program startup; write it now.
// NB: we're assuming here that the specified dir already exists
if err := coverage.EmitMetaDataToDir(*coverageoutdir); err != nil {
			log.Fatalf("writing coverage meta-data: %v", err)
}
}
for {
...
if *coverageoutdir != "" && <received signal to emit coverage data> {
if err := coverage.EmitCounterDataToDir(*coverageoutdir); err != nil {
				log.Fatalf("writing coverage counter-data: %v", err)
}
}
}
}
```
The second API variants will support writing coverage meta-data and counter data to a user-specified io.Writer (where the io.Writer is presumably backed by a pipe or network connection of some sort):
```Go
import "runtime/coverage"
var codecoverageflag = flag.Bool(...)
func server() {
...
var w io.Writer
if *codecoverageflag {
// Meta-data is already available on program startup; write it now.
w = <obtain destination io.Writer somehow>
if err := coverage.EmitMetaDataToWriter(w); err != nil {
			log.Fatalf("writing coverage meta-data: %v", err)
}
}
for {
...
if *codecoverageflag && <received signal to emit coverage data> {
if err := coverage.EmitCounterDataToWriter(w); err != nil {
				log.Fatalf("writing coverage counter-data: %v", err)
}
}
}
}
```
These APIs will return an error if invoked from within an application not built
with the "-cover" flag.
### Coverage and modules
Most modern Go programs make extensive use of dependent third-party packages;
with the advent of Go modules, we now have systems in place to explicitly
identify and track these dependencies.
When application writers add a third-party dependency, in most cases the authors
will not be interested in having that dependency's code count towards the
"percent of lines covered" metric for their application (there will definitely
be exceptions to this rule, but it should hold in most cases).
It makes sense to leverage information from the Go module system when collecting
code coverage data.
Within the context of the module system, a given package feeding into the build
of an application will have one of the three following dispositions (relative to
the main module):
* Contained: package is part of the module itself (not a dependency)
* Dependent: package is a direct or indirect dependency of the module (appearing in go.mod)
* Stdlib: package is part of the Go standard library / runtime
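The three dispositions above could be recorded as a simple enumeration; the names below belong to this sketch, not to a finalized API.

```go
package main

import "fmt"

// disposition classifies a package relative to the main module.
type disposition int

const (
	contained disposition = iota // part of the main module itself
	dependent                    // direct or indirect go.mod dependency
	stdlib                       // Go standard library / runtime
)

func (d disposition) String() string {
	return [...]string{"contained", "dependent", "stdlib"}[d]
}

func main() {
	fmt.Println(contained, dependent, stdlib)
}
```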
With this in mind, the proposal when building an application for coverage will
be to instrument every package that feeds into the build, but record the
disposition for each package (as above), then allow the user to select the
proper granularity or treatment of dependencies when viewing or reporting.
As an example, consider the [Delve](https://github.com/go-delve/delve) debugger
(a Go application). One entry in the Delve V1.8 go.mod file is:
github.com/cosiner/argv v0.1.0
This package ("argv") has about 500 lines of Go code and a couple dozen Go
functions; Delve uses only a single exported function.
For a developer trying to generate a coverage report for Delve, it seems
unlikely that they would want to include "argv" as part of the coverage
statistics (percent lines/functions executed), given the secondary and very
modest role that the dependency plays.
On the other hand, it's possible to imagine scenarios in which a specific
dependency plays an integral or important role for a given application, meaning
that a developer might want to include the package in the application's coverage
statistics.
### Merging coverage data output files
As part of this work, the proposal is to provide "go tool" utilities for merging coverage data files, so that collection of coverage data files (emitted from multiple runs of an instrumented executable) can be merged into a single summary output file.
More details are provided below in the section 'Coverage data file tooling'.
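At its core, merging sums per-unit execution counts across runs. The sketch below keys counters by an illustrative "file:unit" ID; this is a simplification of the real data file format, not its actual representation.

```go
package main

import "fmt"

// mergeProfiles sums execution counts for each coverable unit across the
// counter data gathered from multiple runs.
func mergeProfiles(runs []map[string]uint32) map[string]uint32 {
	merged := make(map[string]uint32)
	for _, run := range runs {
		for unit, count := range run {
			merged[unit] += count
		}
	}
	return merged
}

func main() {
	m := mergeProfiles([]map[string]uint32{
		{"p.go:0": 1, "p.go:1": 0},
		{"p.go:0": 2, "p.go:1": 1},
	})
	fmt.Println(m["p.go:0"], m["p.go:1"]) // 3 1
}
```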
### Differential coverage
When fixing a bug in an application, it is common practice to add a new unit
test in addition to the code change that comprises the actual fix.
When using code coverage, users may want to learn how many of the changed lines
in their code are actually covered when the new test runs.
Assuming we have a set of N coverage data output files (corresponding to those
generated when running the existing set of tests for a package) and a new
coverage data file generated from a new testpoint, it would be useful to provide
a tool to "subtract" out the coverage information from the first set from the
second file.
This would leave just the set of new lines / regions that the new test causes to
be covered above and beyond what is already there.
This feature (profile subtraction) would make it much easier to write tooling
that would provide feedback to developers on whether newly written unit tests
are covering new code in the way that the developer intended.
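The subtraction described above can be sketched over the same simplified per-unit counter representation: only units executed by the new test and never by the baseline survive. Key names are illustrative.

```go
package main

import "fmt"

// subtract returns the units covered by newProf that the baseline never
// executed -- the coverage contributed solely by the new test.
func subtract(newProf, baseline map[string]uint32) map[string]uint32 {
	diff := make(map[string]uint32)
	for unit, count := range newProf {
		if count > 0 && baseline[unit] == 0 {
			diff[unit] = count
		}
	}
	return diff
}

func main() {
	baseline := map[string]uint32{"p.go:0": 5, "p.go:1": 0}
	newTest := map[string]uint32{"p.go:0": 1, "p.go:1": 2}
	// Only p.go:1 is newly covered by the new test.
	fmt.Println(len(subtract(newTest, baseline)))
}
```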
## Design details
This section digs into more of the technical details of the changes needed in
the compiler, runtime, and other parts of the toolchain.
### Package selection when building instrumented applications
In the existing "go test" based coverage design, the default is to instrument only those packages that are specifically selected for testing. Example:
```
$ go test -cover p1 p2/...
...
$
```
In the invocation above, the Go tool reports coverage for package `p1` and for all packages under `p2`, but not for any other packages (for example, any of the various packages imported by `p1`).
When building applications for coverage, the default will be to instrument all packages in the main module for the application being built.
Here is an example using the "delve" debugger (a Go application):
```
$ git clone -b v1.3.2 https://github.com/go-delve/delve
...
$ cd delve
$ go list ./...
github.com/go-delve/delve/cmd/dlv
github.com/go-delve/delve/cmd/dlv/cmds
...
github.com/go-delve/delve/service/rpccommon
github.com/go-delve/delve/service/test
$ fgrep spf13 go.mod
github.com/spf13/cobra v0.0.0-20170417170307-b6cb39589372
github.com/spf13/pflag v0.0.0-20170417173400-9e4c21054fa1 // indirect
$ go build -cover -o dlv.inst.exe ./cmd/dlv
$
```
When the resulting program (`dlv.inst.exe`) is run, it will capture coverage information for just the subset of dependent packages shown in the `go list` command above.
In particular, no coverage will be collected/reported for packages such as `github.com/spf13/cobra` or for packages in the Go standard library (ex: `fmt`).
Users can override this default by passing the `-coverpkg` option to `go build`.
Some additional examples:
```
// Collects coverage for _only_ the github.com/spf13/cobra package
$ go build -cover -coverpkg=github.com/spf13/cobra ./cmd/dlv
// Collects coverage for all packages in the main module and
// all packages listed as dependencies in go.mod
$ go build -cover -coverpkg=mod.deps ./cmd/dlv
// Collects coverage for all packages (including the Go std library)
$ go build -cover -coverpkg=all ./cmd/dlv
$
```
### Coverage instrumentation: compiler or tool?
Performing code coverage instrumentation in the compiler (as opposed to prior to compilation via tooling) has some distinct advantages.
Compiler-based instrumentation is potentially much faster than the tool-based approach, for a start.
In addition, the compiler can apply special treatment to the coverage meta-data variables generated during instrumentation (marking them read-only), and/or provide special treatment for coverage counter variables (ensuring that they are aggregated).
Compiler-based instrumentation also has disadvantages, however.
The "front end" (lexical analysis and parsing) portions of most compilers are typically designed to capture a minimum amount of source position information, just enough to support accurate error reporting, but no more.
For example, in this code:
```Go
L11: func ABC(x int) {
L12: if x < 0 {
L13: bar()
L14: }
L15: }
```
Consider the `{` and `}` tokens on lines 12 and 14. While the compiler will accept these tokens, it will not necessarily create explicit representations for them (with detailed line/column source positions) in its IR, because once parsing is complete (and no syntax errors are reported), there isn't any need to keep this information around (it would just be a waste of memory).
This is a problem for code coverage meta-data generation, since we'd like to record these sorts of source positions for reporting purposes later on (for example, during HTML generation).
This poses a problem: if we change the compiler to capture and hold onto more of this source position info, we risk slowing down compilation overall (even if coverage instrumentation is turned off).
### Hybrid instrumentation
To ensure that coverage instrumentation has all of the source position information it needs, and that we gain some of the benefits of using the compiler, the proposal is to use a hybrid approach: employ source-to-source rewriting for the actual counter annotation/insertion, but then pass information about the counter data structures to the compiler (via a config file) so that the compiler can also play a part.
The `cmd/cover` tool will be modified to operate at the package level and not at the level of an individual source file; the output from the instrumentation process will be a series of modified source files, plus a summary file containing things like the names of the variables generated during instrumentation.
This generated file will be passed to the compiler when the instrumented package is compiled.
The new style of instrumentation will segregate coverage meta-data and coverage counters, so as to allow the compiler to place meta-data into the read-only data section of the instrumented binary.
This segregation will continue when the instrumented program writes out coverage data files at program termination: meta-data and counter data will be written to distinct output files.
### New instrumentation strategy
Consider the following source fragment:
```Go
package p
L4: func small(x, y int) int {
L5: v++
L6: // comment
L7: if y == 0 || x < 0 {
L8: return x
L9: }
L10: return (x << 1) ^ (9 / y)
L11: }
L12:
L13: func medium() {
L14: s1 := small(q, r)
L15: z += s1
L16: s2 := small(r, q)
L17: w -= s2
L18: }
```
For each function, the coverage instrumenter will analyze the function to divide it into "coverable units", where each coverable unit corresponds roughly to a [basic block](https://en.wikipedia.org/wiki/Basic_block).
The instrumenter will create:
1. a chunk of read-only meta-data that stores details on the coverable units
for the function, and
2. an array of counters, one for each coverable unit
Finally, the instrumenter will insert code into each coverable unit to increment or set the appropriate counter when the unit executes.
#### Function meta-data
The function meta-data entry for a unit will include the starting and ending
source position for the unit, along with the number of executable statements in
the unit.
For example, the portion of the meta-data for the function "small" above might
look like
```
Unit File Start End Number of
index pos pos statements
0 F0 L5 L7 2
1 F0 L8 L8 1
```
where F0 corresponds to an index into a table of source files for the package.
At the package level, the compiler will emit code into the package "init"
function to record the blob of meta-data describing all the functions in the
package, adding it onto a global list.
More details on the package meta-data format can be found below.
#### Function counter array
The counter array for each function will be a distinct BSS (uninitialized data)
symbol. These anonymous symbols will be separate entities to ensure that if a
function is dead-coded by the linker, the corresponding counter array is also
removed.
Counter arrays will be tagged by the compiler with an attribute to identify
them to the Go linker, which will aggregate all counter symbols into a single
section in the output binary.
Although the counter symbol is managed by the compiler as an array, it can be
viewed as a struct of the following form:
```C
struct {
numCtrs uint32
pkgId uint32
funcId uint32
counterArray [numUnits]uint32
}
```
In the struct above, "numCtrs" stores the number of blocks / coverable units
within the function in question, "pkgId" is the ID of the containing package for
the function, "funcId" is the ID or index of the function within the package,
and finally "counterArray" stores the actual coverage counter values for the
function.
The compiler will emit code into the entry basic block of each function that
will store func-specific values for the number of counters (will always be
non-zero), the function ID, and the package ID.
When a coverage-instrumented binary terminates execution and we need to write
out coverage data, the runtime can make a sweep through the counter section for
the binary, and can easily skip over sub-sections corresponding to functions
that were never executed.
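A hedged sketch of that sweep is below, over a flattened counter section laid out as `[numCtrs, pkgId, funcId, counters...]` per function, as in the struct above. A function that never ran leaves its `numCtrs` slot zero, so its entire (all-zero) region can be skipped; the code here is illustrative, not the runtime's implementation.

```go
package main

import "fmt"

// sweep walks a counter section and calls visit for each function that
// executed at least once, skipping zeroed regions of unexecuted functions.
func sweep(section []uint32, visit func(pkg, fn uint32, ctrs []uint32)) {
	for i := 0; i < len(section); {
		n := int(section[i])
		if n == 0 {
			i++ // unexecuted function: advance through its zeroed region
			continue
		}
		visit(section[i+1], section[i+2], section[i+3:i+3+n])
		i += 3 + n
	}
}

func main() {
	// Two functions: func 0 ran (2 counters), func 1 never ran (zeroed).
	section := []uint32{2, 0, 0, 7, 3, 0, 0, 0, 0, 0}
	executed := 0
	sweep(section, func(pkg, fn uint32, ctrs []uint32) { executed++ })
	fmt.Println(executed) // 1
}
```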
### Details on package meta-data symbol format
As mentioned previously, for each instrumented package, the compiler will emit a
blob of meta-data describing each function in the package, with info on the
specific lines in the function corresponding to "coverable" statements.
A package meta-data blob will be a single large RODATA symbol with the following
internal format.
```
Header
File/Func table
... list of function descriptors ...
```
Header information will include:
```
- package path
- number of files
- number of functions
- package classification/disposition relative to go.mod
```
where classification is an enum or set of flags holding the provenance of the
package relative to its enclosing module (described in the "Coverage and
Modules" section above).
The file/function table is basically a string table for the meta-data blob;
other parts of the meta data (header and function descriptors) will refer to
strings by index (order of appearance) in this table.
A function descriptor will take the following form:
```
function name (index into string table)
number of coverable units
<list of entries for each unit>
```
Each entry for a coverable unit will take the form
```
<file> <start line> <end line> <number of statements>
```
As an example, consider the following Go package:
```Go
01: package p
02:
03: var v, w, z int
04:
05: func small(x, y int) int {
06: v++
07: // comment
08: if y == 0 {
09: return x
10: }
11: return (x << 1) ^ (9 / y)
12: }
13:
14: func Medium(q, r int) int {
15: s1 := small(q, r)
16: z += s1
17: s2 := small(r, q)
18: w -= s2
19: return w + z
20: }
```
The meta-data for this package would look something like
```
--header----------
| size: size of this blob in bytes
| packagepath: <path to p>
| module: <modulename>
| classification: ...
| nfiles: 1
| nfunctions: 2
--file + function table------
| <uleb128 len> 4
| <uleb128 len> 5
| <uleb128 len> 6
| <data> "p.go"
| <data> "small"
| <data> "Medium"
--func 1------
| uint32 num units: 3
| uint32 func name: 1 (index into string table)
| <unit 0>: F0 L6 L8 2
| <unit 1>: F0 L9 L9 1
| <unit 2>: F0 L11 L11 1
--func 2------
| uint32 num units: 1
| uint32 func name: 2 (index into string table)
| <unit 0>: F0 L15 L19 5
---end-----------
```
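The `<uleb128 len>` fields above use the standard unsigned LEB128 variable-length encoding; a minimal sketch of an encoder and decoder:

```Go
package main

import "fmt"

// appendULEB128 appends the unsigned LEB128 encoding of v to b:
// 7 bits per byte, low-order bits first, with the high bit set on
// every byte except the last.
func appendULEB128(b []byte, v uint64) []byte {
	for {
		c := byte(v & 0x7f)
		v >>= 7
		if v != 0 {
			c |= 0x80
		}
		b = append(b, c)
		if v == 0 {
			return b
		}
	}
}

// readULEB128 decodes a ULEB128 value from b, returning the value and
// the number of bytes consumed (0 if b is truncated).
func readULEB128(b []byte) (uint64, int) {
	var v uint64
	var shift uint
	for i, c := range b {
		v |= uint64(c&0x7f) << shift
		if c&0x80 == 0 {
			return v, i + 1
		}
		shift += 7
	}
	return 0, 0
}

func main() {
	enc := appendULEB128(nil, 624485)
	fmt.Printf("% x\n", enc) // e5 8e 26 (the classic LEB128 example)
	v, n := readULEB128(enc)
	fmt.Println(v, n) // 624485 3
}
```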
### Details on runtime support
#### Instrumented program execution
When an instrumented executable runs, during package initialization each package
will register a pointer to its meta-data blob onto a global list, so that when
the program terminates we can write out the meta-data for all packages
(including those whose functions were never executed at runtime).
Within an instrumented function, the prolog for the function will have
instrumentation code to:
* record the number of counters, function ID, and package ID in the initial
portion of the counter array
* update the counter for the prolog basic block (either set a bit, increment a
counter, or atomically increment a counter)
#### Instrumented program termination
When an instrumented program terminates, or when some other event takes place
that requires emitting a coverage data output file, the runtime routines
responsible will open an output file in the appropriate directory (name chosen
to minimize the possibility of collisions) and emit an output data file.
### Coverage data file format
The existing Go cmd/cover uses a text-based output format when emitting coverage
data files.
For the example package "p" given in the “Details on compiler changes” section
above, the output data might look like this:
```
mode: set
cov-example/p/p.go:5.26,8.12 2 1
cov-example/p/p.go:11.2,11.27 1 1
cov-example/p/p.go:8.12,10.3 1 1
cov-example/p/p.go:14.27,20.2 5 1
```
Each line is a package path, source position info (file/line/col) for a basic
block, statement count, and an execution count (or 1/0 boolean value indicating
"hit" or "not hit").
This format is simple and straightforward for reporting tools to digest, but it
is also somewhat space-inefficient.
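For illustration, a record in this legacy format could be parsed with something like the following sketch (not the actual cmd/cover code; the `coverLine` type is invented here):

```Go
package main

import (
	"fmt"
	"strings"
)

// coverLine models one record of the legacy text format:
// file.go:line0.col0,line1.col1 numStmts count
type coverLine struct {
	file                     string
	line0, col0, line1, col1 int
	numStmts, count          int
}

func parseCoverLine(s string) (coverLine, error) {
	var cl coverLine
	// Split on the last colon, since the file path may contain colons
	// on some platforms.
	i := strings.LastIndex(s, ":")
	if i < 0 {
		return cl, fmt.Errorf("malformed line %q", s)
	}
	cl.file = s[:i]
	_, err := fmt.Sscanf(s[i+1:], "%d.%d,%d.%d %d %d",
		&cl.line0, &cl.col0, &cl.line1, &cl.col1, &cl.numStmts, &cl.count)
	return cl, err
}

func main() {
	cl, err := parseCoverLine("cov-example/p/p.go:5.26,8.12 2 1")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cl)
}
```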
The proposal is to switch to a binary data file format, but provide tools for
easily converting a binary file to the existing legacy format.
Exact details of the new format are still TBD, but very roughly: it should be
possible to just have a file header with information on the
execution/invocation, then a series of package meta-data blobs (drawn directly
from the corresponding rodata symbols in the binary).
Counter data will be written to a separate file composed of a header followed by a series of counter blocks, one per function.
Each counter data file will store the hash of the meta-data file that it is associated with.
Counter file header information will include items such as:
* binary name
* module name
* hash of meta-data for program
* process ID
* nanotime at point where data is emitted
Since the meta-data portion of the coverage output is invariant from run to run of a given instrumented executable, the meta-data writing step can often be skipped: at the point where an instrumented program terminates, if it sees that a meta-data file with the proper hash and length already exists, it can emit only a counter data file.
### Coverage data file tooling
A new tool, `covdata`, will be provided for manipulating coverage data files generated from runs of instrumented executables.
The covdata tool will support merging, dumping, conversion, subtraction, intersection, and other operations.
#### Merge
The covdata `merge` subcommand reads data files from a series of input directories and merges them together into a single output directory.
Example usage:
```
// Run an instrumented program twice.
$ mkdir /tmp/dir1 /tmp/dir2
$ GOCOVERDIR=/tmp/dir1 ./prog.exe <first set of inputs>
$ GOCOVERDIR=/tmp/dir2 ./prog.exe <second set of inputs>
$ ls /tmp/dir1
covcounters.7927fd1274379ed93b11f6bf5324859a.592186.1651766123343357257
covmeta.7927fd1274379ed93b11f6bf5324859a
$ ls /tmp/dir2
covcounters.7927fd1274379ed93b11f6bf5324859a.592295.1651766170608372775
covmeta.7927fd1274379ed93b11f6bf5324859a
// Merge both directories into a single output dir.
$ mkdir final
$ go tool covdata merge -i=/tmp/dir1,/tmp/dir2 -o final
```
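At its core, merging amounts to combining the per-function counter vectors across inputs. A sketch of that operation (the mode handling below is an assumption about the merge policy, not a specification of it):

```Go
package main

import "fmt"

// mergeCounters merges two counter vectors for the same function.
// For "set" mode profiles counters are effectively booleans, so merging
// ORs them; for "count"/"atomic" mode profiles the counts are summed.
func mergeCounters(a, b []uint32, setMode bool) []uint32 {
	out := make([]uint32, len(a))
	for i := range a {
		if setMode {
			if a[i] != 0 || b[i] != 0 {
				out[i] = 1
			}
		} else {
			out[i] = a[i] + b[i]
		}
	}
	return out
}

func main() {
	fmt.Println(mergeCounters([]uint32{1, 0, 2}, []uint32{0, 0, 3}, false)) // [1 0 5]
	fmt.Println(mergeCounters([]uint32{1, 0, 1}, []uint32{0, 0, 1}, true))  // [1 0 1]
}
```

A real merge would first verify that the meta-data hashes of the input counter files match, so that counter vectors are only combined for identical program builds.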
#### Conversion to legacy text format
The `textfmt` subcommand reads coverage data files in the new format and emits an equivalent file in the existing text format supported by `cmd/cover`. The resulting text files can then be used for reporting using the existing workflows.
Example usage (continuing from above):
```
// Convert coverage data from directory 'final' into text format.
$ go tool covdata textfmt -i=final -o=covdata.txt
$ head covdata.txt
mode: set
cov-example/p/p.go:7.22,8.2 0 0
cov-example/p/p.go:10.31,11.2 1 0
cov-example/p/p.go:11.3,13.3 0 0
cov-example/p/p.go:14.3,16.3 0 0
cov-example/p/p.go:19.33,21.2 1 1
cov-example/p/p.go:23.22,25.2 1 0
...
$ go tool cover -func=covdata.txt | head
cov-example/main/main.go:12: main 90.0%
cov-example/p/p.go:7: emptyFn 0.0%
...
$
```
## Possible Extensions, Future Work, Limitations
### Tagging coverage profiles to support test "origin" queries
For very large applications, it is unlikely that any individual developer has a
complete picture of every line of code in the application, or understands the
exact set of tests that exercise a given region of source code.
When working with unfamiliar code, a common question for developers is, "Which
test or tests caused this line to be executed?".
Currently Go coverage tooling does not provide support for gathering or
reporting this type of information.
For this use case, one way to provide help to users would be to introduce the
idea of coverage data "labels" or "tags", analogous to the [profiling
labels](https://github.com/golang/proposal/blob/master/design/17280-profile-labels.md)
feature for CPU and memory profiling.
The idea would be to associate a set of tags with a given execution of a
coverage-instrumented binary.
Tags applied by default would include the values of GOOS and GOARCH, and, in
the case of a "go test" run, the name of the specific Go test being executed.
The coverage runtime support would capture and record tags for each program or
test execution, and then the reporting tools would provide a way to build a
reverse index (effectively mapping each covered source file line to a set of
tags recorded for its execution).
This is (potentially) a complicated feature to implement given that there are
many different ways to write tests and to structure or organize them.
Go's all.bash is a good example; in addition to the more well-behaved tests like
the standard library package tests, there are also tests that shell out to run
other executables (ex: "go build ...") and tests that operate outside of the
formal "testing" package framework (for example, those executed by
$GOROOT/test/run.go).
In the specific case of Go unit tests (using the "testing" package), there is
also the problem that the package test is a single executable, thus would
produce a single output data file unless special arrangements were made.
One possibility here would be to arrange for a testing mode in which the testing
runtime would clear all of the coverage counters within the executable prior to
the invocation of a given testpoint, then emit a new data file after the
testpoint is complete (this would also require serializing the tests).
### Intra-line coverage
Some coverage tools provide details on control flow within a given source line,
as opposed to only at the line level.
For example, in the function from the previous section:
```Go
L4: func small(x, y int) int {
L5: v++
L6: // comment
L7:   if y == 0 || x < 0 {
L8: return x
L9: }
L10: return (x << 1) ^ (9 / y)
L11: }
```
For line 7 above, it can be helpful to report not just whether the line itself
was executed, but which portions of the conditional within the line.
If the condition “y == 0” is always true, and the "x < 0" test is never
executed, this may be useful to the author/maintainer of the code.
Doing this would require logically splitting the line into two pieces, then
inserting counters for each piece. Each piece can then be reported separately;
in the HTML report output you might see something like:
```
L7a: if y == 0
L7b:                || x < 0 {
```
where each piece would be reported/colored separately.
Existing commercial C/C++ coverage tools (ex: Bullseye) provide this feature
under an option.
### Function-level coverage
For use cases that are especially sensitive to runtime overhead, there may be
value in supporting collection of function-level coverage data, as opposed to
line-level coverage data.
This reduction in granularity would decrease the size of the compiler-emitted
meta-data as well as the runtime overhead (since only a single counter or bit
would be required for a given function).
### Taking into account panic paths
When a Go program invokes panic(), this can result in basic blocks (coverable
units) that are only partially executed, meaning that the approach outlined in
the previous sections can report misleading data. For example:
```Go
L1: func myFunc() int {
L2: defer func() { cleanup() }()
L3: dosomework()
L4: mayPanic()
L5: morework()
L6: if condition2 {
L7: launchOperation()
L8: }
L9: return x
L10: }
```
In the current proposal, the compiler would insert two counters for this
function, one in the function entry and one in the block containing
“`launchOperation()`”.
If it turns out that the function `mayPanic()` always panics, then the reported
coverage data will show lines 5 and 6 above as covered, when in fact they never
execute.
This limitation also exists in the current source-to-source translation based
implementation of cmd/cover.
The limitation could be removed if the compiler were to treat each function call as
ending a coverable unit and beginning a new unit.
Doing this would result in a (probably very substantial) increase in the number
of counters and the size of the meta-data, but would eliminate the drawback in
question.
A number of existing coverage testing frameworks for other languages also have
similar limitations (for example, that of LLVM/clang), and it is an open
question as to how many users would actually make use of this feature if it were
available. There is at least one open issue for this problem.
### Source file directives
It is worth noting that when recording position info, the compiler may need to
have special treatment for file/line directives.
For example, when compiling this package:
```Go
foo.go:
package foo
//line apple.go:101:2
func orange() {
}
//line peach.go:17:1
func something() {
}
```
If the line directives were to be honored when creating coverage reports
(particularly HTML output), it might be difficult for users to make sense of the
output.
# Implementation timetable
The plan is for thanm@ to implement this in the Go 1.19 timeframe.
# Prerequisite Changes
N/A
# Preliminary Results
No data available yet.
|
design | /home/linuxreitt/Michinereitt/Tuning/Workshop_Scripts/hf-codegen/data/golang_public_repos/proposal/design/47916-parameterized-go-types.md | # Additions to go/types to support type parameters
Authors: Rob Findley, Robert Griesemer
Last updated: 2021-08-17
## Abstract
This document proposes changes to `go/types` to expose the additional type information introduced by the type parameters proposal ([#43651](https://golang.org/issues/43651)), including the amendment for type sets ([#45346](https://golang.org/issues/45346)).
The goal of these changes is to make it possible for authors to write tools that understand generic code, while staying compatible and consistent with the existing `go/types` API.
This proposal assumes familiarity with the existing `go/types` API.
## Extensions to the type system
The [type parameters proposal] has a nice synopsis of the proposed language changes; here is a brief description of the extensions to the type system:
- Defined types may be _parameterized_: they may have one or more type parameters in their declaration: `type N[T any] ...`.
- Methods on parameterized types have _receiver type parameters_, which parallel the type parameters from the receiver type declaration: `func (r N[T]) m(...)`.
- Non-method functions may be parameterized, meaning they can have one or more type parameters in their declaration: `func f[T any](...)`.
- Each type parameter has a _type constraint_, which is an interface type: `type N[T interface{ m() }] ...`.
- Interface types that are used only as constraints are permitted new embedded elements that restrict the set of types that may implement them: `type N[T interface{ ~int|string }] ...`.
- The `interface{ ... }` wrapper may be elided for constraint interface literals containing a single embedded element. For example `type N[T ~int|string]` is equivalent to `type N[T interface{~int|string}]`.
- A new predeclared interface type `comparable` is implemented by all types for which the `==` and `!=` operators may be used.
- A new predeclared interface type `any` is a type alias for `interface{}`.
- A parameterized (defined) type may be _instantiated_ by providing type arguments: `type S N[int]; var x N[string]`.
- A parameterized function may be instantiated by providing explicit type arguments, or via type inference.
## Proposal
The sections below describe new types and functions to be added, as well as how they interact with existing `go/types` APIs.
### Type parameters and the `types.TypeParam` Type
```go
func NewTypeParam(obj *TypeName, constraint Type) *TypeParam
func (*TypeParam) Constraint() Type
func (*TypeParam) SetConstraint(Type)
func (*TypeParam) Obj() *TypeName
func (*TypeParam) Index() int
// Underlying and String implement Type.
func (*TypeParam) Underlying() Type
func (*TypeParam) String() string
```
Within type and function declarations, type parameter names denote type parameter types, represented by the new `TypeParam` type. It is a `Type` with two additional methods: `Constraint`, which returns its type constraint (which may be a `*Named` or `*Interface`), and `SetConstraint`, which may be used to set its type constraint. The `SetConstraint` method is necessary to break cycles in situations where the constraint type references the type parameter itself.
For a `*TypeParam`, `Underlying` returns the underlying type of its constraint, and `String` returns its name.
Type parameter names are represented by a `*TypeName` with a `*TypeParam`-valued `Type()`. They are declared by type parameter lists, or by type parameters on method receivers. Type parameters are scoped to the type or function declaration on which they are defined. Notably, this introduces a new `*Scope` for parameterized type declarations (for parameterized function declarations the scope is the function scope). The `Obj()` method returns the `*TypeName` corresponding to the type parameter (its receiver). The `Index()` method returns the index of the type parameter in its type parameter list, or `-1` if the type parameter has not yet been bound to a type.
The `NewTypeParam` constructor creates a new type parameter with a given `*TypeName` and type constraint.
For a method on a parameterized type, each receiver type parameter in the method declaration also defines a new `*TypeParam`, with a `*TypeName` object scoped to the function. The number of receiver type parameters and their constraints matches the type parameters on the receiver type declaration.
Just as with any other `Object`, definitions and uses of type parameter names are recorded in `Info.Defs` and `Info.Uses`.
Type parameters are considered identical (as reported by the `Identical` function) if and only if they satisfy pointer equality. However, see the section on `Signature` below for some discussion of identical type parameter lists.
### Type parameter and type argument lists
```go
type TypeParamList struct { /* ... */ }
func (*TypeParamList) Len() int
func (*TypeParamList) At(i int) *TypeParam
type TypeList struct { /* ... */ }
func (*TypeList) Len() int
func (*TypeList) At(i int) Type
```
A `TypeParamList` type is added to represent lists of type parameters. Similarly, a `TypeList` type is added to represent lists of type arguments. Both types have a `Len` and `At` methods, with the only difference between them being the type returned by `At`.
### Changes to `types.Named`
```go
func (*Named) TypeParams() *TypeParamList
func (*Named) SetTypeParams([]*TypeParam)
func (*Named) TypeArgs() *TypeList
func (*Named) Origin() *Named
```
The `TypeParams` and `SetTypeParams` methods are added to `*Named` to get and set type parameters. Once a type parameter has been passed to `SetTypeParams`, it is considered _bound_ and must not be used in any subsequent calls to `Named.SetTypeParams` or `Signature.SetTypeParams`; doing so will panic. For non-parameterized types, `TypeParams` returns nil. Note that `SetTypeParams` is necessary to break cycles in the case that type parameter constraints refer to the type being defined.
When a `*Named` type is instantiated (see [instantiation](#instantiation) below), the result is another `*Named` type which retains the original type parameters but gains type arguments. These type arguments are substituted in the underlying type of the origin type to produce a new underlying type. Similarly, type arguments are substituted for the corresponding receiver type parameter in method declarations to produce a new method type.
These type arguments can be accessed via the `TypeArgs` method. For non-instantiated types, `TypeArgs` returns nil.
For instantiated types, the `Origin` method returns the parameterized type that was used to create the instance. For non-instantiated types, `Origin` returns the receiver.
For an instantiated type `t`, `t.Obj()` is equivalent to `t.Origin().Obj()`.
As an example, consider the following code:
```go
type N[T any] struct { t T }
func (N[T]) m()
type _ = N[int]
```
After type checking, the type `N[int]` is a `*Named` type with the same type parameters as `N`, but with type arguments of `{int}`. `Underlying()` of `N[int]` is `struct { t int }`, and `Method(0)` of `N[int]` is a new `*Func`: `func (N[int]) m()`.
Parameterized named types continue to be considered identical (as reported by the `Identical` function) if they satisfy pointer equality. Instantiated named types are considered identical if their origin types are identical and their type arguments are pairwise identical. Instantiating twice with the same origin type and type arguments _may_ result in pointer-identical `*Named` instances, but this is not guaranteed. There is further discussion of this in the [instantiation](#instantiation) section below.
### Changes to `types.Signature`
```go
func NewSignatureType(recv *Var, recvTypeParams, typeParams []*TypeParam, params, results *Tuple, variadic bool) *Signature
func (*Signature) TypeParams() *TypeParamList
func (*Signature) RecvTypeParams() *TypeParamList
```
A new constructor `NewSignatureType` is added to create `*Signature` types that use type parameters, deprecating the existing `NewSignature`. The `TypeParams` method is added to `*Signature` to get type parameters. The `RecvTypeParams` method is added to get receiver type parameters. Signatures cannot have both type parameters and receiver type parameters, and passing both to `NewSignatureType` will panic. Just as with `*Named` types, type parameters can only be bound once: passing a type parameter more than once to either `Named.SetTypeParams` or `NewSignatureType` will panic.
For generic `Signatures` to be identical (as reported by `Identical`), they must be identical up to renaming of type parameters.
### Changes to `types.Interface`
```go
func (*Interface) IsComparable() bool
func (*Interface) IsMethodSet() bool
func (*Interface) IsImplicit() bool
func (*Interface) MarkImplicit()
```
The `*Interface` type gains two methods to answer questions about its type set:
- `IsComparable` reports whether every element of its type set is comparable, which could be the case if the interface is explicitly restricted to comparable types, or if it embeds the special interface `comparable`.
- `IsMethodSet` reports whether the interface is fully described by its method set; that is to say, does not contain any type restricting embedded elements that are not just methods.
To understand the specific type restrictions of an interface, users may access embedded elements via the existing `EmbeddedType` API, along with the new `Union` type below. Notably, this means that `EmbeddedType` may now return any kind of `Type`.
Interfaces are identical if their type sets are identical. See the [draft spec](https://golang.org/cl/294469) for details on type sets.
To represent implicit interfaces in constraint position, `*Interface` gains an `IsImplicit` accessor. The `MarkImplicit` method may be used to mark interfaces as implicit during importing. `MarkImplicit` is idempotent.
The existing `Interface.Empty` method returns true if the interface has no type restrictions and has an empty method set (alternatively: if its type set is the set of all types).
### The `Union` type
```go
type Union struct { /* ... */ }
func NewUnion([]*Term) *Union
func (*Union) Len() int
func (*Union) Term(int) *Term
// Underlying and String implement Type.
func (*Union) Underlying() Type
func (*Union) String() string
type Term struct { /* ... */ }
func NewTerm(bool, Type) *Term
func (*Term) Tilde() bool
func (*Term) Type() Type
func (*Term) String() string
```
A new `Union` type is introduced to represent the type expression `T1 | T2 | ... | Tn`, where `Ti` is a tilde term (`T` or `~T`, for type `T`). A new `Term` type represents the tilde terms `Ti`, with a `Type` method to access the term type and a `Tilde` method to report if a tilde was present.
The `Len` and `Term` methods may be used to access terms in the union. Unions represent their type expression syntactically: after type checking the union terms will correspond 1:1 to the term expressions in the source, though their order is not guaranteed to be the same. Unions should only appear as embedded elements in interfaces; this is the only place they will appear after type checking, and their behavior when used elsewhere is undefined.
Unions are identical if they describe the same type set. For example `~int | string` is identical to both `string | ~int` and `int | string | ~int`.
### Instantiation
```go
func Instantiate(ctxt *Context, orig Type, targs []Type, verify bool) (Type, error)
type ArgumentError struct {
Index int
Err error
}
func (*ArgumentError) Error() string
func (*ArgumentError) Unwrap() error
type Context struct { /* ... */ }
func NewContext() *Context
type Config struct {
// ...
Context *Context
}
```
A new `Instantiate` function is added to allow the creation of type and function instances. The `orig` argument supplies the parameterized `*Named` or `*Signature` type being instantiated, and the `targs` argument supplies the type arguments to be substituted for type parameters. It is an error to call `Instantiate` with anything other than a `*Named` or `*Signature` type for `orig`, or with a `targs` value that has length different from the number of type parameters on the parameterized type; doing so will result in a non-nil error being returned.
If `verify` is true, `Instantiate` will verify that type arguments satisfy their corresponding type parameter constraint. If they do not, the returned error will be non-nil and may wrap an `*ArgumentError`. `ArgumentError` is a new type used to represent an error associated with a specific argument index.
If `orig` is a `*Named` or `*Signature` type, the length of `targs` matches the number of type parameters, and `verify` is false, `Instantiate` will return a nil error.
A `Context` type is introduced to represent an opaque type checking context. This context may be passed as the first argument to `Instantiate`, or as a field on `Checker`. When a single non-nil `ctxt` argument is used for subsequent calls to `Instantiate`, identical instantiations may re-use existing type instances. Similarly, passing a non-nil `Context` to `Config` may result in type instances being re-used during the type checking pass. This is purely a memory optimization, and callers may not rely on pointer identity for instances: they must still use `Identical` when comparing instantiated types.
### Instance information
```go
type Info struct {
// ...
Instances map[*ast.Ident]Instance
}
type Instance struct {
TypeArgs *TypeList
Type Type
}
```
Whenever a type or function is instantiated (via explicit instantiation or type inference), we record information about the instantiation in a new `Instances` map on the `Info` struct. This maps the identifier denoting the parameterized function or type in an instantiation expression to the type arguments used in instantiation and resulting instantiated `*Named` or `*Signature` type. For example:
- In the explicit type instantiation `T[int, string]`, `Instances` maps the identifier for `T` to the type arguments `int, string` and resulting `*Named` type.
- Given a parameterized function declaration `func F[P any](P)` and a call expression `F(int(1))`, `Instances` would map the identifier for `F` in the call expression to the type argument `int`, and resulting `*Signature` type.
Notably, instantiating the type returned by `Uses[id].Type()` with the type arguments `Instances[id].TypeArgs` results in a type that is identical to the type `Instances[id].Type`.
The `Instances` map serves several purposes:
- Providing a mechanism for finding all instances. This could be useful for applications like code generation or go/ssa.
- Mapping an instance back to positions where it occurs, for the purpose of e.g. presenting diagnostics.
- Finding inferred type arguments.
### `comparable` and `any`
The new predeclared interfaces `comparable` and `any` are declared in the `Universe` scope.
[type parameters proposal]: https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md
[type set proposal]: https://golang.org/issues/45346
|
design | /home/linuxreitt/Michinereitt/Tuning/Workshop_Scripts/hf-codegen/data/golang_public_repos/proposal/design/40307-fuzzing.md | Moved to [golang.org/s/draft-fuzzing-design](https://golang.org/s/draft-fuzzing-design).
|
design | /home/linuxreitt/Michinereitt/Tuning/Workshop_Scripts/hf-codegen/data/golang_public_repos/proposal/design/59960-heap-hugepage-util.md | # A more hugepage-aware Go heap
Authors: Michael Knyszek, Michael Pratt
## Background
[Transparent huge pages (THP) admin
guide](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html).
[Go scavenging
policy](30333-smarter-scavenging.md#which-memory-should-we-scavenge).
(Implementation details are out-of-date, but linked policy is relevant.)
[THP flag behavior](#appendix_thp-flag-behavior).
## Motivation
Currently, Go's hugepage-related policies [do not play well
together](https://github.com/golang/go/issues/55328) and have bit-rotted.[^1]
The result is that the memory regions the Go runtime chooses to mark as
`MADV_NOHUGEPAGE` and `MADV_HUGEPAGE` are somewhat haphazard, resulting in
memory overuse for small heaps.
The memory overuse is upwards of 40% memory overhead in some cases.
Turning off huge pages entirely fixes the problem, but leaves CPU performance on
the table.
This policy also means large heaps might have dense sections that are
erroneously mapped as `MADV_NOHUGEPAGE`, costing up to 1% throughput.
The goal of this work is to eliminate this overhead for small heaps while
improving huge page utilization for large heaps.
[^1]: [Large allocations](https://cs.opensource.google/go/go/+/master:src/runtime/mheap.go;l=1344;drc=c70fd4b30aba5db2df7b5f6b0833c62b909f50eb)
will force [a call to `MADV_HUGEPAGE` for any aligned huge pages
within](https://cs.opensource.google/go/go/+/master:src/runtime/mem_linux.go;l=148;drc=9839668b5619f45e293dd40339bf0ac614ea6bee),
while small allocations tend to leave memory in an undetermined state for
huge pages.
The scavenger will try to release entire aligned hugepages at a time.
Also, when any memory is released, [we `MADV_NOHUGEPAGE` any aligned pages
in the range we
release](https://cs.opensource.google/go/go/+/master:src/runtime/mem_linux.go;l=40;drc=9839668b5619f45e293dd40339bf0ac614ea6bee).
However, the scavenger will [only release 64 KiB at a time unless it finds
an aligned huge page to
release](https://cs.opensource.google/go/go/+/master:src/runtime/mgcscavenge.go;l=564;drc=c70fd4b30aba5db2df7b5f6b0833c62b909f50eb),
and even then it'll [only `MADV_NOHUGEPAGE` the corresponding huge pages if
the region it's scavenging crosses a huge page
boundary](https://cs.opensource.google/go/go/+/master:src/runtime/mem_linux.go;l=70;drc=9839668b5619f45e293dd40339bf0ac614ea6bee).
## Proposal
One key insight in the design of the scavenger is that the runtime always has a
good idea of how much memory will be used soon: the total heap footprint for a
GC cycle is determined by the heap goal. [^2]
[^2]: The runtime also has a first-fit page allocator so that the scavenger can
take pages from the high addresses in the heap, again to reduce the chance
of conflict.
The scavenger tries to return memory to the OS such that it leaves enough
paged-in memory around to reach the heap goal (adjusted for fragmentation
within spans and a 10% buffer for fragmentation outside of spans, or capped
by the memory limit).
The purpose behind this is to reduce the chance that the scavenger will
return memory to the OS that will be used soon.
Indeed, by [tracing page allocations and watching page state over
time](#appendix_page-traces) we can see that Go heaps tend to get very dense
toward the end of a GC cycle; this makes all of that memory a decent candidate
for huge pages from the perspective of fragmentation.
However, it's also clear this density fluctuates significantly within a GC
cycle.
Therefore, I propose the following policy:
1. All new memory is initially marked as `MADV_HUGEPAGE` with the expectation
that it will be used.
1. Before the scavenger releases pages in an aligned 4 MiB region of memory [^3]
it [first](#appendix_thp-flag-behavior) marks it as `MADV_NOHUGEPAGE` if it
isn't already marked as such.
- If `max_ptes_none` is 0, then skip this step.
1. Aligned 4 MiB regions of memory are only available to scavenge if they
weren't at least 96% [^4] full at the end of the last GC cycle. [^5]
- Scavenging for `GOMEMLIMIT` or `runtime/debug.FreeOSMemory` ignores this
rule.
1. Any aligned 4 MiB region of memory that exceeds 96% occupancy is immediately
marked as `MADV_HUGEPAGE`.
- If `max_ptes_none` is 0, then use `MADV_COLLAPSE` instead, if available.
- Memory scavenged for `GOMEMLIMIT` or `runtime/debug.FreeOSMemory` is not
marked `MADV_HUGEPAGE` until the next allocation that causes this
condition after the end of the current GC cycle. [^6]
[^3]: 4 MiB doesn't align with linux/amd64 huge page sizes, but it is a very
    convenient number for the runtime because the page allocator manages memory
    in 4 MiB chunks.
[^4]: The bar for explicit (non-default) backing by huge pages must be very
high.
The main issue is the default value of
`/sys/kernel/mm/transparent_hugepage/defrag` on Linux: it forces regions
marked as `MADV_HUGEPAGE` to be immediately backed, stalling in the kernel
until it can compact and rearrange things to provide a huge page.
Meanwhile the combination of `MADV_NOHUGEPAGE` and `MADV_DONTNEED` does the
opposite.
Switching between these two states often creates really expensive churn.
[^5]: Note that `runtime/debug.FreeOSMemory` and the mechanism to maintain
`GOMEMLIMIT` must still be able to release all memory to be effective.
For that reason, this rule does not apply to those two situations.
Basically, these cases get to skip waiting until the end of the GC cycle,
optimistically assuming that memory won't be used.
[^6]: It might happen that the wrong memory was scavenged (memory that soon
after exceeds 96% occupancy).
This delay helps reduce churn.
The goal of these changes is to ensure that when sparse regions of the heap have
their memory returned to the OS, it stays that way regardless of
`max_ptes_none`.
Meanwhile, the policy avoids expensive churn by delaying the release of pages
that were part of dense memory regions by at least a full GC cycle.
Note that there's potentially quite a lot of hysteresis here, which could impact
memory reclaim for, for example, a brief memory spike followed by a long-lived
idle low-memory state.
In the worst case, the time between GC cycles is 2 minutes, and the scavenger's
slowest return rate is ~256 MiB/sec. [^7] I suspect this isn't slow enough to be
a problem in practice.
Furthermore, `GOMEMLIMIT` can still be employed to maintain a memory maximum.
[^7]: The scavenger is much more aggressive than it once was, targeting 1% of
total CPU usage.
Spending 1% of one CPU core in 2018 on `MADV_DONTNEED` meant roughly 8 KiB
released per millisecond in the worst case.
For a `GOMAXPROCS=32` process, this worst case is now approximately 256 KiB
per millisecond.
In the best case, wherein the scavenger can identify whole unreleased huge
pages, it would release 2 MiB per millisecond in 2018, so 64 MiB per
millisecond today.
## Alternative attempts
Initially, I attempted a design where all heap memory up to the heap goal
(address-ordered) is marked as `MADV_HUGEPAGE` and ineligible for scavenging.
The rest is always eligible for scavenging, and the scavenger marks that memory
as `MADV_NOHUGEPAGE`.
This approach had a few problems:
1. The heap goal tends to fluctuate, creating churn at the boundary.
1. When the heap is actively growing, the aftermath of this churn actually ends
up in the middle of the fully-grown heap, as the scavenger works on memory
beyond the boundary in between GC cycles.
1. Any fragmentation that does exist in the middle of the heap, for example if
most allocations are large, is never looked at by the scavenger.
I also tried a simple heuristic to turn off the scavenger when it looks like the
heap is growing, but not all heaps grow monotonically, so a small amount of
churn still occurred.
It's difficult to come up with a good heuristic without assuming monotonicity.
My next attempt was more direct: mark high density chunks as `MADV_HUGEPAGE`,
and allow low density chunks to be scavenged and set as `MADV_NOHUGEPAGE`.
A chunk would become high density if it was observed to have at least 80%
occupancy, and would later switch back to low density if it had less than 20%
occupancy.
This gap existed for hysteresis to reduce churn.
Unfortunately, this also didn't work: GC-heavy programs often have memory
regions that go from extremely low (near 0%) occupancy to 100% within a single
GC cycle, creating a lot of churn.
The design above is ultimately a combination of these two designs: assume that
the heap gets generally dense within a GC cycle, but handle it on a
chunk-by-chunk basis.
Where all this differs from other huge page efforts, such as [what TCMalloc
did](https://google.github.io/tcmalloc/temeraire.html), is the lack of
bin-packing of allocated memory in huge pages (which is really the majority and
key part of the design).
Bin-packing increases the likelihood that an entire huge page will become
free, by preferring to place new memory in existing, partially-filled huge
pages rather than letting some global placement policy (such as "best-fit")
put it anywhere.
This not only improves the efficiency of releasing memory, but makes the overall
footprint smaller due to less fragmentation.
This is unlikely to be that useful for Go since Go's heap already, at least
transiently, gets very dense.
Another thing that gets in the way of doing the same kind of bin-packing for Go
is that the allocator's slow path gets hit much harder than TCMalloc's slow
path.
The reason for this boils down to the GC memory reuse pattern (essentially, FIFO
vs. LIFO reuse).
Slowdowns in this path will likely create scalability problems.
## Appendix: THP flag behavior
Whether or not pages are eligible for THP is controlled by a combination of
settings:
`/sys/kernel/mm/transparent_hugepage/enabled`: system-wide control, possible
values:
- `never`: THP disabled
- `madvise`: Only pages with `MADV_HUGEPAGE` are eligible
- `always`: All pages are eligible, unless marked `MADV_NOHUGEPAGE`
`prctl(PR_SET_THP_DISABLE)`: process-wide control to disable THP
`madvise`: per-mapping control, possible values:
- `MADV_NOHUGEPAGE`: mapping not eligible for THP
- Note that existing huge pages will not be split if this flag is set.
- `MADV_HUGEPAGE`: mapping eligible for THP unless there is a process- or
system-wide disable.
- Unset: mapping eligible for THP if system-wide control is set to “always”.
`/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none`: system-wide
control that specifies how many extra small pages can be allocated when
collapsing a group of pages into a huge page.
In other words, how many small pages in a candidate huge page can be
not-faulted-in or faulted-in zero pages.
`MADV_DONTNEED` on a smaller range within a huge page will split the huge page
to zero the range.
However, the full huge page range will still be immediately eligible for
coalescing by `khugepaged` if `max_ptes_none > 0`, which is true for the default
open source Linux configuration.
Thus to both disable future THP and split an existing huge page race-free, you
must first set `MADV_NOHUGEPAGE` and then call `MADV_DONTNEED`.
Another consideration is the newly-upstreamed `MADV_COLLAPSE`, which collapses
memory regions into huge pages unconditionally.
`MADV_DONTNEED` can then be used to break them up.
This scheme represents effectively complete control over huge pages, provided
`khugepaged` doesn't coalesce pages in a way that undoes the `MADV_DONTNEED`
(which can be ensured, for example, by setting `max_ptes_none` to zero).
## Appendix: Page traces
To investigate this issue I built a
[low-overhead](https://perf.golang.org/search?q=upload:20221024.9) [page event
tracer](https://go.dev/cl/444157) and [visualization
utility](https://go.dev/cl/444158) to check assumptions of application and GC
behavior.
Below are a bunch of traces and conclusions from them.
- [Tile38 K-Nearest benchmark](./59960/tile38.png): GC-heavy benchmark.
Note the fluctuation between very low occupancy and very high occupancy.
During a single GC cycle, the page heap gets at least transiently very dense.
This benchmark caused me the most trouble when trying out ideas.
- [Go compiler building a massive package](./59960/compiler.png): Note again the
high density.
# Additions to go/ast and go/token to support parameterized functions and types
Authors: Rob Findley, Robert Griesemer
Last Updated: 2021-08-18
## Abstract
This document proposes changes to `go/ast` to store the additional syntactic information necessary for the type parameters proposal ([#43651](https://golang.org/issues/43651)), including the amendment for type sets ([#45346](https://golang.org/issues/45346)). The changes to `go/types` related to type checking are discussed in a [separate proposal](https://golang.org/cl/328610).
## Syntax Changes
See the [type parameters proposal] for a full discussion of the language changes to support parameterized functions and types, but to summarize the changes in syntax:
- Type and function declarations get optional _type parameters_, as in `type List[T any] ...` or `func f[T1, T2 any]() { ... }`. Type parameters are a [parameter list].
- Parameterized types may be _instantiated_ with one or more _type arguments_, to make them non-parameterized type expressions, as in `l := &List[int]{}` or `type intList List[int]`. Type arguments are an [expression list].
- Parameterized functions may be instantiated with one or more type arguments when they are called or used as function values, as in `g := f[int]` or `x := f[int]()`. Function type arguments are an [expression list].
- Interface types can have new embedded elements that restrict the set of types that may implement them, for example `interface { ~int64|~float64 }`. Such elements are type expressions of the form `T1 | T2 | ... | Tn`, where each term `Ti` is either a type or of the form `~T` for some type `T`.
## Proposal
The sections below describe new types and functions to be added, as well as their invariants. For a detailed discussion of these design choices, see the [appendix](#appendix_considerations-for-api-changes-to-go_ast).
### For type parameters in type and function declarations
```go
type TypeSpec struct {
// ...existing fields
TypeParams *FieldList
}
type FuncType struct {
// ...existing fields
TypeParams *FieldList
}
```
To represent type parameters in type and function declarations, both `ast.TypeSpec` and `ast.FuncType` gain a new `TypeParams *FieldList` field, which will be nil in the case of non-parameterized types and functions.
### For type and function instantiation
To represent both type and function instantiation with type arguments, we introduce a new node type `ast.IndexListExpr`, which is an `Expr` node similar to `ast.IndexExpr`, but with a slice of indices rather than a single index:
```go
type IndexListExpr struct {
X Expr
Lbrack token.Pos
Indices []Expr
Rbrack token.Pos
}
func (*IndexListExpr) End() token.Pos
func (*IndexListExpr) Pos() token.Pos
```
Type and function instance expressions will be parsed into a single `IndexExpr` if there is only one index, and an `IndexListExpr` if there is more than one index. Specifically, when encountering an expression `f[expr1, ..., exprN]` with `N` argument expressions, we parse as follows:
1. If `N == 1`, as in normal index expressions `f[expr]`, we parse an `IndexExpr`.
2. If `N > 1`, we parse an `IndexListExpr` with `Indices` set to the parsed expressions `expr1, …, exprN`.
3. If `N == 0`, as in the invalid expression `f[]`, we parse an `IndexExpr` with `BadExpr` for its `Index` (this matches the current behavior for invalid index expressions).
There were several alternatives considered for representing this syntax. At least two of these alternatives were implemented. They are worth discussing:
- Add a new `ListExpr` node type that holds an expression list, to serve as the `Index` field for an `IndexExpr` when `N >= 2`. This is an elegant solution, but results in inefficient storage and, more importantly, adds a new node type that exists only to alter the meaning of an existing node. This is inconsistent with the design of other nodes in `go/ast`, where additional nodes are preferred to overloading existing nodes. Compare with `RangeStmt` and `TypeSwitchStmt`, which are distinct nodes in `go/ast`. Having distinct nodes is generally easier to work with, as each node has a more uniform composition.
- Overload `ast.CallExpr` to have a `Brackets bool` field, so `f[T]` would be analogous to `f(T)`, but with `Brackets` set to `true`. This is roughly equivalent to the `IndexListExpr` node, and allows us to avoid adding a new type. However, it overloads the meaning of `CallExpr` and adds an additional field.
- Add a `Tail []Expr` field to `IndexExpr` to hold additional type arguments. While this avoids a new node type, it adds an extra field to `IndexExpr` even when it is not needed.
### For type restrictions
```go
package token
const TILDE Token = 88
```
The new syntax for type restrictions in interfaces can be represented using existing node types.
We can represent the expression `~T1|T2|~T3` in `interface { ~T1|T2|~T3 }` as a single embedded expression (i.e. an `*ast.Field` with empty `Names`), consisting of unary and binary expressions. Specifically, we can introduce a new token `token.TILDE`, and represent `~expr` as an `*ast.UnaryExpr` where `Op` is `token.TILDE`. We can represent `expr1|expr2` as an `*ast.BinaryExpr` where `Op` is `token.OR`, as would be done for a value expression involving bitwise-or.
## Appendix: Considerations for API changes to go/ast
This section discusses what makes a change to `go/ast` break compatibility, what impact changes can have on users beyond pure compatibility, and what type of information is available to the parser at the time we choose a representation for syntax.
As described in the go1 [compatibility promise], it is not enough for standard library packages to simply make no breaking API changes: valid programs must continue to both compile *and* run. Or put differently: the API of a library is both the structure and runtime behavior of its exported API.
This matters because the definition of a 'valid program' using `go/ast` is arguably a gray area. In `go/ast`, there is no separation between the interface to AST nodes and the data they contain: the node set consists entirely of pointers to structs where every field is exported. Is it a valid use of `go/ast` to assume that every field is exported (e.g. walk nodes using reflection)? Is it valid to assume that the set of nodes is complete (e.g. by panicking in the default clause of a type switch)? Which fields may be assumed to be non-nil?
For the purpose of this document, I propose the following heuristic:
> A breaking change to `go/ast` (or go/parser) is any change that modifies (1)
> the parsed representation of existing, valid Go code, or (2) the per-node
> _invariants_ that are preserved in the representation of _invalid_ Go code.
> We consider all documented invariants plus any additional invariants that are
> assumed in significant amounts of code.
Of these two clauses, (1) is straightforward and hopefully uncontroversial: code that is valid in Go 1.17 must parse to an equivalent AST in Go 1.18. (2) is more subtle: there is no guarantee that the syntax tree of invalid code will not change. After all, use of type parameters is invalid in go1.17. Rather, the only guarantee is that _if a property of existing fields holds for a node type N in all representations of code, valid or invalid, it should continue to hold_. For example, `ast.Walk` assumes that `ast.IndexExpr.Index` is never nil. This must be preserved if we use `IndexExpr` to represent type instantiation, even for invalid instantiation expressions such as `var l List[]`.
The rationale for this heuristic is pragmatic: there is too much code in the wild that makes assumptions about nodes in AST representations; that code should not break.
Notable edge cases:
- It makes sense to preserve the property that all fields on Nodes are exported. `cmd/gofmt` makes this assumption, and it is reasonable to assume that other users will have made this assumption as well (and this was the original intent).
- There is code in the wild that assumes the completeness of node sets, i.e. panicking if an unknown node is encountered. For example, see issue [vscode-go#1551](https://github.com/golang/vscode-go/issues/1551) for x/tools. If we were to consider this a valid use of `go/ast`, that would mean that we could never introduce a new node type. In order to avoid introducing new nodes, we'd have to pack new syntactic constructs into existing nodes, resulting in cumbersome APIs and increased memory usage. Also, from another perspective, assuming the completeness of node types is not so different from assuming the completeness of fields in struct literals, which is explicitly not guaranteed by the [compatibility promise]. We should therefore consider adding a new node type a valid change (and do our best to publicize this change to our users).
Finally, when selecting our representation, keep in mind that the parser has access to only local syntactic information. Therefore, it cannot differentiate between, for example, the representation of `f[T]` in `var f []func(); T := 0; f[T]()` and `func f[S any](){} ... f[T]()`.
[expression list]: https://golang.org/ref/spec#ExpressionList
[type parameters proposal]: https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md
[parameter list]: https://golang.org/ref/spec#ParameterList
[compatibility promise]: https://golang.org/doc/go1compat
# Proposal: Go 2 Error Inspection
Jonathan Amsterdam\
Russ Cox\
Marcel van Lohuizen\
Damien Neil
Last updated: January 25, 2019
Discussion at: https://golang.org/issue/29934
Past discussion at:
- https://golang.org/design/go2draft-error-inspection
- https://golang.org/design/go2draft-error-printing
- https://golang.org/wiki/Go2ErrorValuesFeedback
## Abstract
We propose several additions and changes to the standard library’s `errors` and
`fmt` packages, with the goal of making errors more informative for both
programs and people. We codify the common practice of wrapping one error in
another, and provide two convenience functions, `Is` and `As`, for traversing
the chain of wrapped errors.
We enrich error formatting by making it easy for error types to display
additional information when detailed output is requested with the `%+v`
formatting directive.
We add function, file and line information to the errors returned by
`errors.New` and `fmt.Errorf`, and provide a `Frame` type to simplify adding
location information to any error type.
We add support for detail formatting and wrapping to `fmt.Errorf`.
## Background
We provided background and a rationale in our [draft designs for error
inspection](https://go.googlesource.com/proposal/+/master/design/go2draft-error-inspection.md)
and
[printing](https://go.googlesource.com/proposal/+/master/design/go2draft-error-printing.md).
Here we provide a brief summary.
While Go 1’s definition of errors is open-ended, its actual support for errors
is minimal, providing only string messages. Many Go programmers want to provide
additional information with errors, and of course nothing has stopped them from
doing so. But one pattern has become so pervasive that we feel it is worth
enshrining in the standard library: the idea of wrapping one error in another
that provides additional information. Several packages provide wrapping support,
including the popular
[github.com/pkg/errors](https://godoc.org/github.com/pkg/errors).
Others have pointed out that indiscriminate wrapping can expose implementation
details, introducing undesired coupling between packages. As an example, the
[`errgo`](https://godoc.org/gopkg.in/errgo.v2) package lets users control
wrapping to hide details.
Another popular request is for location information in the form of stack frames.
Some advocate for complete stack traces, while others prefer to add location
information only at certain points.
## Proposal
We add a standard way to wrap errors to the standard library, to encourage the
practice and to make it easy to use. We separate error wrapping, designed for
programs, from error formatting, designed for people. This makes it possible to
hide implementation details from programs while displaying them for diagnosis.
We also add location (stack frame) information to standard errors and make it
easy for developers to include location information in their own errors.
All of the API changes are in the `errors` package. We also change the behavior
of parts of the `fmt` package.
### Wrapping
An error that wraps another error should implement `Wrapper` by defining an `Unwrap` method.
```
type Wrapper interface {
// Unwrap returns the next error in the error chain.
// If there is no next error, Unwrap returns nil.
Unwrap() error
}
```
The `Unwrap` function is a convenience for calling the `Unwrap` method if one exists.
```
// Unwrap returns the result of calling the Unwrap method on err, if err implements Unwrap.
// Otherwise, Unwrap returns nil.
func Unwrap(err error) error
```
The `Is` function follows the chain of errors by calling `Unwrap`, searching for
one that matches a target. It is intended to be used instead of equality for
matching sentinel errors (unique error values). An error type can implement an
`Is` method to override the default behavior.
```
// Is reports whether any error in err's chain matches target.
//
// An error is considered to match a target if it is equal to that target or if
// it implements a method Is(error) bool such that Is(target) returns true.
func Is(err, target error) bool
```
The `As` function searches the wrapping chain for an error whose type matches
that of a target. An error type can implement `As` to override the default
behavior.
```
// As finds the first error in err's chain that matches the type to which target
// points, and if so, sets the target to its value and returns true. An error
// matches a type if it is assignable to the target type, or if it has a method
// As(interface{}) bool such that As(target) returns true. As will panic if target
// is not a non-nil pointer to a type which implements error or is of interface type.
//
// The As method should set the target to its value and return true if err
// matches the type to which target points.
func As(err error, target interface{}) bool
```
A [vet check](https://golang.org/cmd/vet) will be implemented to check that
the `target` argument is valid.
The `Opaque` function hides a wrapped error from programmatic inspection.
```
// Opaque returns an error with the same error formatting as err
// but that does not match err and cannot be unwrapped.
func Opaque(err error) error
```
### Stack Frames
The `Frame` type holds location information: the function name, file and line of
a single stack frame.
```
type Frame struct {
// unexported fields
}
```
The `Caller` function returns a `Frame` at a given distance from the call site.
It is a convenience wrapper around `runtime.Callers`.
```
func Caller(skip int) Frame
```
To display itself, `Frame` implements a `Format` method that takes a `Printer`.
See [Formatting](#formatting) below for the definition of `Printer`.
```
// Format prints the stack as error detail.
// It should be called from an error's FormatError implementation,
// before printing any other error detail.
func (f Frame) Format(p Printer)
```
The errors returned from `errors.New` and `fmt.Errorf` include a `Frame` which
will be displayed when the error is formatted with additional detail (see
below).
### Formatting
We introduce two interfaces for error formatting into the `errors` package and
change the behavior of formatted output (the `Print`, `Println` and `Printf`
functions of the `fmt` package and their `S` and `F` variants) to recognize
them.
The `errors.Formatter` interface adds the `FormatError` method to the `error` interface.
```
type Formatter interface {
error
// FormatError prints the receiver's first error and returns the next error to
// be formatted, if any.
FormatError(p Printer) (next error)
}
```
An error type that wants to control its formatted output should implement
`Formatter`. During formatted output, `FormatError` will be called if it is
implemented, in preference to both the `Error` and `Format` methods.
`FormatError` returns an error, which will also be output if it is not `nil`. If
an error type implements `Wrapper`, then it would likely return the result of
`Unwrap` from `FormatError`, but it is not required to do so. An error that does
not implement `Wrapper` may still return a non-nil value from `FormatError`,
hiding implementation detail from programs while still displaying it to users.
The `Printer` passed to `FormatError` provides `Print` and `Printf` methods to
generate output, as well as a `Detail` method that reports whether the printing
is happening in "detail mode" (triggered by `%+v`). Implementations should first
call `Printer.Detail`, and if it returns true should then print detailed
information like the location of the error.
```
type Printer interface {
// Print appends args to the message output.
Print(args ...interface{})
// Printf writes a formatted string.
Printf(format string, args ...interface{})
// Detail reports whether error detail is requested.
// After the first call to Detail, all text written to the Printer
// is formatted as additional detail, or ignored when
// detail has not been requested.
// If Detail returns false, the caller can avoid printing the detail at all.
Detail() bool
}
```
When not in detail mode (`%v`, or in `Print` and `Println` functions and their
variants), errors print on a single line. In detail mode, errors print over
multiple lines, as shown here:
```
write users database:
more detail here
mypkg/db.Open
/path/to/database.go:111
- call myserver.Method:
google.golang.org/grpc.Invoke
/path/to/grpc.go:222
- dial myserver:3333:
net.Dial
/path/to/net/dial.go:333
- open /etc/resolv.conf:
os.Open
/path/to/os/open.go:444
- permission denied
```
### Changes to `fmt.Errorf`
We modify the behavior of `fmt.Errorf` in the following case: if the last
argument is an error `err` and the format string ends with `: %s`, `: %v`, or
`: %w`, then the returned error will implement `FormatError` to return `err`. In
the case of the new verb `%w`, the returned error will also implement
`errors.Wrapper` with an `Unwrap` method returning `err`.
### Changes to the `os` package
The `os` package contains several predicate functions which test
an error against a condition: `IsExist`, `IsNotExist`, `IsPermission`,
and `IsTimeout`. For each of these conditions, we modify the `os`
package so that `errors.Is(err, os.ErrX)` returns true when
`os.IsX(err)` is true for any error in `err`'s chain. The `os` package
already contains `ErrExist`, `ErrNotExist`, and `ErrPermission`
sentinel values; we will add `ErrTimeout`.
### Transition
If we add this functionality to the standard library in Go 1.13, code that needs
to keep building with previous versions of Go will not be able to depend on the
new standard library. While every such package could use build tags and multiple
source files, that seems like too much work for a smooth transition.
To help the transition, we will publish a new package
[golang.org/x/xerrors](https://godoc.org/golang.org/x/xerrors), which will work
with both Go 1.13 and earlier versions and will provide the following:
- The `Wrapper`, `Frame`, `Formatter` and `Printer` types described above.
- The `Unwrap`, `Is`, `As`, `Opaque` and `Caller` functions described above.
- A `New` function that is a drop-in replacement for `errors.New`, but returns
an error that behaves as described above.
- An `Errorf` function that is a drop-in replacement for `fmt.Errorf`, except
that it behaves as described above.
- A `FormatError` function that adapts the `Format` method to use the new
formatting implementation. An error implementation can make sure earlier Go
versions call its `FormatError` method by adding this `Format` method:
```
type MyError ...
func (m *MyError) Format(f fmt.State, c rune) { // implements fmt.Formatter
xerrors.FormatError(m, f, c) // will call m.FormatError
}
func (m *MyError) Error() string { ... }
func (m *MyError) FormatError(p xerrors.Printer) error { ... }
```
## Rationale
We provided a rationale for most of these changes in the draft designs (linked
above). Here we justify the parts of the design that have changed or been added
since those documents were written.
- The original draft design proposed that the `As` function use generics, and
suggested an `AsValue` function as a temporary alternative until generics were
available. We find that the `As` function in the form we describe here is just
as concise and readable as a generic version would be, if not more so.
- We added the ability for error types to modify the default behavior of the
`Is` and `As` functions by implementing `Is` and `As` methods, respectively.
We felt that the extra power these give to error implementers was worth the
slight additional complexity.
- We included a `Frame` in the errors returned by `errors.New` and `fmt.Errorf`
so that existing Go programs could reap the benefits of location information.
We benchmarked the slowdown from fetching stack information and felt that it
was tolerable.
- We changed the behavior of `fmt.Errorf` for the same reason: so existing Go
programs could enjoy the new formatting behavior without modification. We
decided against wrapping errors passed to `fmt.Errorf` by default, since doing
so would effectively change the exposed surface of a package by revealing the
types of the wrapped errors. Instead, we require that programmers opt in to
wrapping by using the new formatting verb `%w`.
- Lastly, we want to acknowledge the several comments on the [feedback
wiki](https://golang.org/wiki/Go2ErrorValuesFeedback) that suggested that we
go further by incorporating a way to represent multiple errors as a single
error value. We understand that this is a popular request, but at this point
we feel we have introduced enough new features for one proposal, and we’d like
to see how these work out before adding more. We can always add a multi-error
type in a later proposal, and meanwhile it remains easy to write your own.
## Compatibility
None of the proposed changes violates the [Go 1 compatibility
guidelines](https://golang.org/doc/go1compat). Gathering frame information may
slow down `errors.New` slightly, but this is unlikely to affect practical
programs. Errors constructed with `errors.New` and `fmt.Errorf` will display
differently with `%+v`.
## Implementation
The implementation requires changes to the standard library.
The [golang.org/x/exp/errors](https://godoc.org/golang.org/x/exp/errors) package
contains a proposed implementation by Marcel van Lohuizen. We intend to make the
changes to the main tree at the start of the Go 1.13 cycle, around February 1.
As noted in our blog post ["Go 2, here we
come!"](https://blog.golang.org/go2-here-we-come), the development cycle will
serve as a way to collect experience about these new features and feedback from
(very) early adopters.
As noted above, the
[golang.org/x/xerrors](https://godoc.org/golang.org/x/xerrors) package, also by
Marcel, will provide code that can be used with earlier Go versions.
# Proposal: DNS Based Vanity Imports
Author(s): Sam Whited <sam@samwhited.com>
Last updated: 2018-06-30
Discussion at https://golang.org/issue/26160
## Abstract
A new mechanism for performing vanity imports using DNS TXT records.
## Background
Vanity imports allow the servers behind Go import paths to delegate hosting of
a package's source code to another host.
This is done using the HTTP protocol over TLS which means that expired
certificates, problems with bespoke servers, timeouts contacting the server, and
any number of other problems can cause looking up the source to fail.
Running an HTTP server also adds unnecessary overhead and expense that may be
difficult for hobbyists that create popular packages.
To avoid these problems, a new mechanism for looking up vanity imports is
needed.
## Proposal
To create a vanity import using DNS a separate TXT record is created for each
package with the name `go-import.example.net` where `example.net` is the domain
from the package import path.
The record data is the same format that would appear in an HTTP based vanity
imports "content" attribute.
This allows us to easily list all packages with vanity imports under a given
apex domain:
$ dig +short go-import.golang.org TXT
"golang.org/x/vgo git https://go.googlesource.com/vgo"
"golang.org/x/text git https://go.googlesource.com/text"
…
Because the current system for vanity import paths requires TLS unless the
`-insecure` flag is provided to `go get`, it is desirable to provide similar
security guarantees with DNS.
To this end `go get` should only accept TXT records with a verified DNSSEC
signature unless the `-insecure` flag has been passed.
To determine which package to import the Go tool would search each TXT record
returned for one that starts with the same fully qualified import path that
triggered the lookup.
TXT records for a given domain should be fetched only once when the first
package with a given domain in its import path is found and reused when parsing
other import lines in the same build.
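The record-matching step might be sketched as follows. The function name `matchVanity` and the surrounding program are illustrative; a real implementation would fetch the records once per domain (e.g. via `net.LookupTXT`) and would additionally require DNSSEC validation, which the standard resolver API does not expose.

```go
package main

import (
	"fmt"
	"strings"
)

// matchVanity scans TXT record data in the format described above
// ("<prefix> <vcs> <repo-url>") and returns the VCS and repository
// for the first record whose prefix matches the import path.
func matchVanity(importPath string, records []string) (vcs, repo string, ok bool) {
	for _, r := range records {
		fields := strings.Fields(r)
		if len(fields) == 3 && strings.HasPrefix(importPath, fields[0]) {
			return fields[1], fields[2], true
		}
	}
	return "", "", false
}

func main() {
	// Records as they might be returned for go-import.golang.org.
	records := []string{
		"golang.org/x/vgo git https://go.googlesource.com/vgo",
		"golang.org/x/text git https://go.googlesource.com/text",
	}
	vcs, repo, ok := matchVanity("golang.org/x/text/unicode", records)
	fmt.Println(ok, vcs, repo)
}
```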
## Rationale
Before we can make an HTTP request (as the current vanity import mechanism
does), or even establish a TLS connection, we must already have performed a DNS
lookup.
Because this happens anyways, it would be ideal to cut out other steps (HTTP,
TLS, etc.) altogether (and the extra problems they bring with them) and store
the information in the DNS record.
Even if vanity imports are deprecated in the near future for ZIP based package
servers ala vgo, backwards compatibility will be needed for some time and any
experience gained here may apply to pointing domains at package servers
(eg. via DNS SRV).
TXT records were chosen instead of a [custom resource record] to simplify
deployment and avoid the overhead of dealing with the IETF.
Because TXT records are limited to 255 characters but the apex domain used by a
package may be significantly longer than this, it is possible that some packages
may not fit in the record.
Since fully qualified package names must be typed in import statements this does
not seem practical or a cause for concern, so it is not addressed here.
[custom resource record]: https://tools.ietf.org/html/rfc6195
## Compatibility
If no TXT records are found for a given domain `go get` should fall back to
using the HTTP-based mechanism.
Having DNS TXT record lookup also lays the groundwork for discovering package
servers in a vgo-based future.
## Implementation
The author of this proposal has started looking into implementing it in the Go
tool, but cannot yet commit to a timeframe for an implementation.
## Open issues
- Does a DNSSEC implementation exist that could be vendored into the Go tool?
# Proposal: Ignore tags in struct type conversions
Author: [Robert Griesemer](gri@golang.org)
Created: June 16, 2016
Last updated: June 16, 2016
Discussion at [issue 16085](https://golang.org/issue/16085)
## Abstract
This document proposes to relax struct conversions such that struct tags are
ignored.
An alternative to the proposal is to add a new function reflect.StructCopy
that could be used instead.
## Background
The [spec](https://codereview.appspot.com/1698043) and corresponding
[implementation change](https://golang.org/cl/1667048) submitted almost
exactly six years ago made [struct tags](https://golang.org/ref/spec#Struct_types)
an integral part of a struct type by including them in the definition of struct
[type identity](https://golang.org/ref/spec#Type_identity) and indirectly in
struct [type conversions](https://golang.org/ref/spec#Conversions).
In retrospect, this change may have been overly restrictive with respect to
its impact on struct conversions, given the way struct tag use has evolved
over the years.
A common scenario is the conversion of struct data coming from, say a database,
to an _equivalent_ (identical but for its tags) struct that can be JSON-encoded,
with the JSON encoding defined by the respective struct tags.
For an example of such a type, see
https://github.com/golang/text/blob/master/unicode/cldr/xml.go#L6.
The way struct conversions are defined, it is not currently possible to convert
a value from one struct type to an equivalent one.
Instead, every field must be copied manually, which leads to more source text,
and less readable and possibly less efficient code.
The code must also be adjusted every time the involved struct types change.
[Issue 6858](https://github.com/golang/go/issues/6858) discusses this in more detail.
rsc@golang and r@golang suggest that we might be able to relax the rules for
structs such that struct tags are ignored for conversions, but not for struct
identity.
## Proposal
The spec states a set of rules for conversions.
The following rules apply to conversions of struct values (among others):
A non-constant value x can be converted to type T if:
- x's type and T have identical underlying types
- x's type and T are unnamed pointer types and their pointer base types have identical underlying types
The proposal is to change these two rules to:
A non-constant value x can be converted to type T if:
- x's type and T have identical underlying types _if struct tags are ignored (recursively)_
- x's type and T are unnamed pointer types and their pointer base types have identical underlying types _if struct tags are ignored (recursively)_
Additionally, package reflect is adjusted (Type.ConvertibleTo, Value.Convert)
to match this language change.
In other words, type identity of structs remains unchanged, but for the purpose
of struct conversions, type identity is relaxed such that struct tags are
ignored.
## Compatibility and impact
This is a backward-compatible language change since it loosens an existing
restriction:
Any existing code will continue to compile with the same meaning (*), and some
code that currently is invalid will become valid.
Programs that manually copy all fields from one struct to another struct with
identical type but for the (type name and) tags, will be able to use a single
struct conversion instead.
More importantly, with this change two different (type) views of the same
struct value become possible via pointers of different types.
For instance, given:
type jsonPerson struct {
    name string `json:"name"`
}

type xmlPerson struct {
    name string `xml:"name"`
}
we will be able to access a value of *jsonPerson type
person := new(jsonPerson)
// some code that populates person
as an *xmlPerson:
alias := (*xmlPerson)(person)
// some code that uses alias
This may eliminate the need to copy struct values just to change the tags.
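Under the proposed relaxation (which was ultimately adopted in Go 1.8), the equivalent-struct conversion compiles directly; the example below uses an exported field so the program is self-contained.

```go
package main

import "fmt"

type jsonPerson struct {
	Name string `json:"name"`
}

type xmlPerson struct {
	Name string `xml:"name"`
}

func main() {
	p := jsonPerson{Name: "Gopher"}
	// Legal under the proposed rules: the types are identical but for
	// their struct tags, which the conversion ignores.
	x := xmlPerson(p)
	fmt.Println(x.Name) // Gopher
}
```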
Type identity and conversion tests are also available programmatically, via
the reflect package.
The operations of Type.ConvertibleTo and Value.Convert will be relaxed for
structs with different (or absent) tags:
Type.ConvertibleTo will return true for some arguments where it currently
returns false.
This may change the behavior of programs depending on this method.
Value.Convert will convert struct values for which the operation panicked
before.
This will only affect programs that relied on (recovered from) that panic.
(*) r@golang points out that a program that is using tags to prevent
(accidental or deliberate) struct conversion would lose that mechanism.
Interestingly, package reflect appears to make such use (see type rtype),
but iant@golang points out that one could obtain the same effect by adding
differently typed zero-sized fields to the respective structs.
## Discussion
From a language spec point of view, changing struct type identity (rather
than struct conversions only) superficially looks like a simpler, cleaner,
and more consistent change: For one, it simplifies the spec, while only
changing struct conversions requires adding an additional rule.
iant@golang points out (https://github.com/golang/go/issues/11661) that
leaving struct identity in place doesn’t make much difference in practice:
It is already impossible to assign or implicitly convert between two
differently named struct types.
Unnamed structs are rare, and if accidental conversion is an issue, one can
always introduce a named struct.
On the other hand, runtime type descriptors (used by reflect.Type, interfaces,
etc) are canonical, so identical types have the same type descriptor.
The descriptor provides struct field tags, so identical types must have
identical tags.
Thus we cannot at this stage separate struct field tags from the notion of
type identity.
To summarize: Relaxing struct conversions only but leaving struct type
identity unchanged is sufficient to enable one kind of data conversion
that is currently overly tedious, and it doesn’t require larger and more
fundamental changes to the run time.
The change may cause a hopefully very small set of programs, which depend
on package reflect’s conversion-related API, to behave differently.
## Open question
Should tags be ignored at the top-level of a struct only, or recursively
all the way down?
For instance, given:
```
type T1 struct {
x int
p *struct {
name string `foo`
}
}
type T2 struct {
x int
p *struct {
name string `bar`
}
}
var t1 T1
```
Should the conversion T2(t1) be legal? If tags are only ignored for the
fields of T1 and T2, conversion is not permitted since the tags attached
to the type of the p field are different.
Alternatively, if tags are ignored recursively, conversion is permitted.
On the other hand, if the types were defined as:
```
type T1 struct {
x int
p *P1
}
type T2 struct {
x int
p *P2
}
```
where P1 and P2 are identical structs but for their tags, the conversion
would not be permitted either way since the p fields have different types
and thus T1 and T2 have different underlying types.
The proposal suggests to ignore tags recursively, “all the way down”.
This seems to be the more sensible approach given the stated goal, which
is to make it easier to convert from one struct type to another, equivalent
type with different tags.
For an example where this matters, see https://play.golang.org/p/U73K50YXYk.
Furthermore, it is always possible to prevent unwanted conversions by
introducing named types, but it would not be possible to enable those
conversions otherwise.
On the other hand, the current implementation of reflect.Value.Convert
will make recursive ignoring of struct tags more complicated and expensive.
crawshaw@golang points out that one could easily use a cache inside the
reflect package if necessary for performance.
## Implementation
An (almost) complete implementation is in https://golang.org/cl/24190/;
with a few pieces missing for the reflect package change.
## Alternatives to the language change
Even a backward-compatible language change needs to meet a high bar before
it can be considered.
It is not yet clear that this proposal satisfies that criteria.
One alternative is to do nothing.
That has the advantage of not breaking anything and also doesn’t require
any implementation effort on the language/library side.
But it means that in some cases structs have to be explicitly converted
through field-by-field assignment.
Another alternative that actually addresses the problem is to provide a
library function.
For instance, package reflect could provide a new function
```
func CopyStruct(dst, src Value, mode Mode)
```
which could be used to copy struct values that have identical types but
for struct tags.
A mode argument might be used to control deep or shallow copy, and perhaps
other modalities.
A deep copy (following pointers) would be a useful feature that the spec
change by itself does not enable.
The cost of using a CopyStruct function instead of a direct struct conversion
is the need to create two reflect.Values, invoking CopyStruct, and (inside
CopyStruct) the cost to verify type identity but for tags.
Copying the actual data needs to be done both in CopyStruct but also with a
direct (language-based) conversion.
The type verification is likely the most expensive step but identity of
struct types (with tags ignored) could be cached.
On the other hand, adonovan@golang points out that the added cost may not
matter in significant ways since these kinds of struct copies often sit
between a database request and an HTTP response.
The functional difference between the proposed spec change and a new
reflect.CopyStruct function is that with CopyStruct an actual copy has to
take place (as is the case now).
The spec change on the other hand permits both approaches: a (copying)
conversion of struct values, or pointers to different struct types that
point to the same struct value via a pointer conversion.
The latter may eliminate a copy of data in the first place.
# Proposal: Scaling the Go page allocator
Author(s): Michael Knyszek, Austin Clements
Last updated: 2019-10-18
## Abstract
The Go runtime's page allocator (i.e. `(*mheap).alloc`) has scalability
problems.
In applications with a high rate of heap allocation and a high GOMAXPROCS, small
regressions in the allocator can quickly become big problems.
Based on ideas from Austin about making P-specific land-grabs to reduce lock
contention, and with evidence that most span allocations are one page in size
and are for small objects (<=32 KiB in size), I propose we:
1. Remove the concept of spans for free memory and track free memory with a
bitmap.
1. Allow a P to cache free pages for uncontended allocation.
Point (1) simplifies the allocator, reduces some constant overheads, and more
importantly enables (2), which tackles lock contention directly.
## Background
The Go runtime's page allocator (i.e. `(*mheap).alloc`) has serious scalability
issues.
These were discovered when working through
[golang/go#28479](https://github.com/golang/go/issues/28479) and
[kubernetes/kubernetes#75833](https://github.com/kubernetes/kubernetes/issues/75833#issuecomment-477758829)
which were both filed during or after the Go 1.12 release.
The common thread between each of these scenarios is a high rate of allocation
and a high level of parallelism (in the Go world, a relatively high GOMAXPROCS
value, such as 32).
As it turned out, adding some extra work for a small subset of allocations in Go
1.12 and removing a fast-path data structure in the page heap caused significant
regressions in both throughput and tail latency.
The fundamental issue is the heap lock: all operations in the page heap
(`mheap`) are protected by the heap lock (`mheap.lock`).
A high allocation rate combined with a high degree of parallelism leads to
significant contention on this lock, even though page heap allocations are
relatively cheap and infrequent.
For instance, if the most popular allocation size is ~1 KiB, as seen with the
Kubernetes scalability test, then the runtime accesses the page heap every 10th
allocation or so.
The proof that this is really a scalability issue in the design and not an
implementation bug in Go 1.12 is that we were seeing barging behavior on this
lock in Go 1.11, which indicates that the heap lock was already in a collapsed
state before the regressions in Go 1.12 were even introduced.
## Proposal
I believe we can significantly improve the scalability of the page allocator if
we eliminate as much lock contention in the heap as possible.
We can achieve this in two ways:
1. Make the allocator faster.
The less time spent with the lock held the better the assumptions of e.g. a
futex hold up.
1. Come up with a design that avoids grabbing the lock at all in the common
case.
So, what is this common case? We currently have span allocation data for a
couple large Go applications which reveal that an incredibly high fraction of
allocations are for small object spans.
First, we have data from Kubernetes' 12-hour load test, which indicates that
99.89% of all span allocations are for small object spans, with 93% being from
the first 50 size classes, inclusive.
Next, data from a large Google internal service shows that 95% of its span
allocations are for small object spans, even though this application is known to
make very large allocations relatively frequently.
94% of all of this application's span allocations are from the first 50 size
classes, inclusive.
Thus, I propose we:
* Track free pages in a bitmap that spans the heap's address space.
* Allow a P to cache a set of bits from the bitmap.
The goal is to have most (80%+) small object span allocations allocate quickly,
and without a lock.
(1) makes the allocator significantly more cache-friendly, predictable, and
enables (2), which helps us avoid grabbing the lock in the common case and
allows us to allocate a small number of pages very quickly.
Note that this proposal maintains the current first-fit allocation policy and
highest-address-first scavenging policy.
### Tracking free memory with bitmaps
With a first-fit policy, allocation of one page (the common case) amounts to
finding the first free page in the heap.
One promising idea here is to use a bitmap because modern microarchitectures are
really good at iterating over bits.
Each bit in the bitmap represents a single runtime page (8 KiB as of this
writing), where 1 means in-use and 0 means free.
"In-use" in the context of the new page allocator is now synonymous with "owned
by a span".
The concept of a free span isn't useful here.
I propose that the bitmap be divided up into shards (called chunks) which are
small enough to be quick to iterate over.
512-bit shards would each represent 4 MiB (8 KiB pages) and fit in roughly 2
cache lines.
These shards could live in one global bitmap which is mapped in as needed, or to
reduce the virtual memory footprint, we could use a sparse-array style structure
like `(*mheap).arenas`.
Picking a chunk size which is independent of arena size simplifies the
implementation because arena sizes are platform-dependent.
Simply iterating over the whole bitmap to find a free page is still fairly
inefficient, especially for dense heaps.
We want to be able to quickly skip over completely in-use sections of the heap.
Thus, I propose we attach summary information to each chunk such that it's much
faster to filter out chunks which couldn't possibly satisfy the allocation.
What should this summary information contain? I propose we augment each chunk
with three fields: `start, max, end uintptr`.
`start` represents the number of contiguous 0 bits at the start of this bitmap
shard.
Similarly, `end` represents the number of contiguous 0 bits at the end of the
bitmap shard.
Finally, `max` represents the largest contiguous section of 0 bits in the bitmap
shard.
The diagram below illustrates an example summary for a bitmap chunk.
The arrow indicates which direction addresses go (lower to higher).
The bitmap contains 3 zero bits at its lowest edge and 7 zero bits at its
highest edge.
Within the summary, there are 10 contiguous zero bits, which `max` reflects.
![Diagram of a summary.](35112/summary-diagram.png)
With these three fields, we can determine whether we'll be able to find a
sufficiently large contiguous free section of memory in a given arena or
contiguous set of arenas with a simple state machine.
Computing this summary information for an arena is less trivial to make fast,
and effectively amounts to a combination of a table to get per-byte summaries
and a state machine to merge them until we have a summary which represents the
whole chunk.
The state machine for `start` and `end` is mostly trivial.
`max` is only a little more complex: by knowing `start`, `max`, and `end` for
adjacent summaries, we can merge the summaries by picking the maximum of each
summary's `max` value and the sum of their `start` and `end` values.
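The merge rule can be sketched as follows; `sum` and `merge` are illustrative names, and `n` is the number of pages covered by each summary.

```go
package main

import "fmt"

// sum mirrors a per-chunk summary: counts of free pages at the start,
// the longest free run, and free pages at the end of the chunk.
type sum struct{ start, max, end uint }

// merge combines the summaries of two adjacent regions, where a
// covers the lower addresses and each region holds n pages.
func merge(a, b sum, n uint) sum {
	m := sum{start: a.start, max: a.max, end: b.end}
	if a.start == n { // a is entirely free: its run extends into b
		m.start += b.start
	}
	if b.end == n { // b is entirely free: its run extends back into a
		m.end += a.end
	}
	// The longest run is either within a, within b, or straddles the
	// boundary (a's trailing free pages plus b's leading free pages).
	if cross := a.end + b.start; cross > m.max {
		m.max = cross
	}
	if b.max > m.max {
		m.max = b.max
	}
	return m
}

func main() {
	// Two 512-page chunks: 7 free pages end the first, 6 begin the
	// second, so a 13-page run straddles the boundary.
	fmt.Println(merge(sum{3, 10, 7}, sum{6, 8, 2}, 512)) // {3 13 2}
}
```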
I propose we update these summary values eagerly as spans are allocated and
freed.
For large allocations that span multiple arenas, we can zero out summary
information very quickly, and we really only need to do the full computation of
summary information for the ends of the allocation.
There's a problem in this design so far wherein subsequent allocations may end
up treading the same path over and over.
Unfortunately, this retreading behavior's time complexity is `O(heap * allocs)`.
We propose a simple solution to this problem: maintain a hint address.
A hint address represents an address before which there are definitely no free
pages in the heap.
There may not be free pages for some distance after it, hence why it is just a
hint, but we know for a fact we can prune from the search everything before that
address.
In the steady-state, as we allocate from the lower addresses in the heap, we can
bump the hint forward with every search, effectively eliminating the search
space until new memory is freed.
Most allocations are expected to allocate not far from the hint.
There's still an inherent performance problem with this design: larger
allocations may require iterating over the whole heap, even with the hint
address.
This issue arises from the fact that we now have an allocation algorithm with a
time complexity linear in the size of the heap.
Modern microarchitectures are good, but not quite good enough to just go with
this.
Therefore, I propose we take this notion of a summary-per-chunk and extend it:
we can build a tree around this, wherein a given entry at some level of the
radix tree represents the merge of some number of summaries in the next level.
The leaf level in this case contains the per-chunk summaries, while each entry
in the previous levels may reflect 8 chunks, and so on.
This tree would be constructed from a finite number of arrays of summaries, with
lower layers being smaller in size than following layers, since each entry
reflects a larger portion of the address space.
More specifically, we avoid having an "explicit" pointer-based structure (think
"implicit" vs. "explicit" when it comes to min-heap structures: the former tends
to be an array, while the latter tends to be pointer-based).
Below is a diagram of the complete proposed structure.
![Diagram of the proposed radix tree.](35112/radix-tree-diagram.png)
The bottom two boxes are the arenas and summaries representing the full address
space.
Each red line represents a summary, and each set of dotted lines from a summary
into the next layer reflects which part of that next layer that summary refers
to.
In essence, because this tree reflects our address space, it is in fact a radix
tree over addresses.
By right-shifting a memory address by different amounts, we can find the index
of the exact summary which contains that address at each level.
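Assuming 8 KiB pages, 512-page chunks, and a fanout of 8 between five levels (the configuration proposed below for a 48-bit address space), the per-level index computation might be sketched as:

```go
package main

import "fmt"

const (
	pageShift  = 13 // log2(8 KiB pages)
	chunkShift = 9  // log2(512 pages per chunk)
	levels     = 5
)

// summaryIndex shifts away all address bits below the size of the
// region covered by one summary at the given level, yielding the
// index of the summary covering addr at that level.
func summaryIndex(addr uintptr, level int) uintptr {
	// The leaf level (levels-1) covers one chunk per entry; each
	// level above covers 8x (3 more bits of) address space per entry.
	shift := uint(pageShift + chunkShift + 3*(levels-1-level))
	return addr >> shift
}

func main() {
	addr := uintptr(1) << 38 // 256 GiB into the address space
	for l := 0; l < levels; l++ {
		fmt.Printf("level %d: index %d\n", l, summaryIndex(addr, l))
	}
}
```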
On allocation, this tree may be searched by looking at `start`, `max`, and `end`
at each level: if we see that `max` is large enough, we continue searching in
the next, more granular, level.
If `max` is too small, then we look to see if there's free space spanning two
adjacent summaries' memory regions by looking at the first's `end` value and the
second's `start` value.
Larger allocations are more likely to cross larger boundaries of the address
space, and are therefore more likely to get satisfied by levels in the tree
which are closer to the root.
Note that if the heap has been exhausted, then we will simply iterate over the
root level, find all zeros, and return.
#### Implementation details
A number of details were omitted from the previous section for brevity, but
these details are key for an efficient implementation.
Firstly, note that `start, max, end uintptr` is an awkward structure in size,
requiring either 12 bytes or 24 bytes to store naively, neither of which fits a
small multiple of a cache line comfortably.
To make this structure more cache-friendly, we can pack them tightly into
64-bits if we constrain the height of the radix tree.
The packing is straight-forward: we may dedicate 21 bits to each of these three
numbers and pack them into 63 bits.
A small quirk with this scheme is that each of `start`, `max`, and `end` are
counts, and so we need to represent zero as well as the maximum value (`2^21`),
which at first glance requires an extra bit per field.
Luckily, in that case (i.e. when `max == 2^21`), then `start == max && max ==
end`.
We may use the last remaining bit to represent this case.
A summary representing a completely full region is also conveniently `uint64(0)`
in this representation, which enables us to very quickly skip over parts of the
address space we don't care about with just one load and branch.
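The packing can be sketched like so; the field layout within the 64 bits is illustrative.

```go
package main

import "fmt"

const logMaxPages = 21 // each field gets 21 bits

type pallocSum uint64

// packSum packs start, max, and end into 63 bits; the top bit flags
// the completely free region where all three equal 2^21.
func packSum(start, max, end uint) pallocSum {
	if max >= 1<<logMaxPages {
		return pallocSum(uint64(1) << 63)
	}
	return pallocSum(uint64(start) |
		uint64(max)<<logMaxPages |
		uint64(end)<<(2*logMaxPages))
}

func (p pallocSum) unpack() (start, max, end uint) {
	if p&(1<<63) != 0 { // completely free region
		return 1 << logMaxPages, 1 << logMaxPages, 1 << logMaxPages
	}
	const mask = 1<<logMaxPages - 1
	return uint(p) & mask,
		uint(p>>logMaxPages) & mask,
		uint(p>>(2*logMaxPages)) & mask
}

func main() {
	s := packSum(3, 10, 7)
	fmt.Println(s.unpack()) // 3 10 7
	// A completely full region is just the zero value, so skipping
	// it costs one load and one branch.
	fmt.Println(packSum(0, 0, 0) == 0) // true
}
```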
As mentioned before, a consequence of this packing is that we need to place a
restriction on our structure: each entry in the root level of the radix tree may
only represent at most `2^21` 8 KiB pages, or 16 GiB, because we cannot
represent any more in a single summary.
From this constraint, it follows that the root level will always be
`2^(heapAddrBits - 21 - log2(pageSize))` in size in entries.
Should we need to support much larger heaps, we may easily remedy this by
representing a summary as `start, max, end uint32`, though at the cost of cache
line alignment and 1.5x metadata overhead.
We may also consider packing the three values into two `uint64` values, though
at the cost of twice as much metadata overhead.
Note that this concern is irrelevant on 32-bit architectures: we can easily
represent the whole address space with a tree and 21 bits per summary field.
Unfortunately, we cannot pack it more tightly on 32-bit architectures since at
least 14 bits are required per summary field.
Now that we've limited the size of the root level, we need to pick the sizes of
the subsequent levels.
Each entry in the root level must reflect some number of entries in the
following level, which gives us our fanout.
In order to stay cache-friendly, I propose trying to keep the fanout close to
the size of an L1 cache line or some multiple thereof.
64 bytes per line is generally a safe bet, and our summaries are 8 bytes wide,
so that gives us a fanout of 8.
Taking all this into account, for a 48-bit address space (such as how we treat
`linux/amd64` in the runtime), I propose the following 5-level array structure:
* Level 0: `16384` entries (fanout = 1, root)
* Level 1: `16384*8` entries (fanout = 8)
* Level 2: `16384*8*8` entries (fanout = 8)
* Level 3: `16384*8*8*8` entries (fanout = 8)
* Level 4: `16384*8*8*8*8` entries (fanout = 8, leaves)
Note that level 4 has `2^48 bytes / (512 * 8 KiB)` entries, which is exactly the
number of chunks in a 48-bit address space.
Each entry at this level represents a single chunk.
Similarly, since a chunk represents 512, or 2^9 pages, each entry in the root
level summarizes a region of `2^21` contiguous pages, as intended.
This scheme can be trivially applied to any system with a larger address space,
since we just increase the size of the root level.
For a 64-bit address space, the root level can get up to 8 GiB in size, but
that's mostly virtual address space which is fairly cheap since we'll only
commit to what we use (see below).
For most heaps, `2^21` contiguous pages or 16 GiB per entry in the root level is
good enough.
If we limited ourselves to 8 entries in the root, we would still be able to
gracefully support up to 128 GiB (and likely double that, thanks to
prefetchers).
Some Go applications may have larger heaps though, but as mentioned before we
can always change the structure of a summary away from packing into 64 bits and
then add an additional level to the tree, at the expense of some additional
metadata overhead.
Overall this uses between roughly a hundred KiB and hundreds of MiB of address
space on systems with smaller address spaces (~600 MiB for a 48-bit address
space, ~128 KiB for a 32-bit address space).
For a full 64-bit address space, this layout requires ~37 TiB of reserved
memory.
At first glance, this seems like an enormous amount, but in reality that's an
extremely small fraction (~0.00022%) of the full address space.
Furthermore, this address space is very cheap since we'll only commit what we
use, and to reduce the size of core dumps and eliminate issues with overcommit
we will map the space as `PROT_NONE` (only `MEM_RESERVE` on Windows) and map it
as read/write explicitly when we grow the heap (an infrequent operation).
There are only two known adverse effects of this large mapping on Linux:
1. `ulimit -v`, which restricts even `PROT_NONE` mappings.
1. Programs like top, when they report virtual memory footprint, include
`PROT_NONE` mappings.
In the grand scheme of things, these are relatively minor consequences.
The former is not used often, and in cases where it is, it's used as an
inaccurate proxy for limiting a process's physical memory use.
The latter is mostly cosmetic, though perhaps some monitoring system uses it as
a proxy for memory use, and will likely result in some harmless questions.
### Allow a P to cache bits from the bitmap
I propose adding a free page cache to each P.
The page cache, in essence, is a base address marking the beginning of a 64-page
aligned chunk, and a 64-bit bitmap representing free pages in that chunk.
With 8 KiB pages, this makes it so that at most each P can hold onto 512 KiB of
memory.
The allocation algorithm would thus consist of a P first checking its own cache.
If it's empty, it would then go into the bitmap and cache the first non-zero
chunk of 64 bits it sees, noting the base address of those 64 bits.
It then allocates out of its own cache if able.
If it's unable to satisfy the allocation from these bits, then it goes back and
starts searching for contiguous bits, falling back on heap growth if it fails.
If the allocation request is more than 16 pages in size, then we don't even
bother checking the cache.
The probability that `N` consecutive free pages will be available in the page
cache decreases exponentially as `N` approaches 64, and 16 strikes a good
balance between being opportunistic and being wasteful.
Note that allocating the first non-zero chunk of 64 bits is an equivalent
operation to allocating one page out of the heap: fundamentally we're looking
for the first free page we can find in both cases.
This means that we can and should optimize for this case, since we expect that
it will be extremely common.
Note also that we can always update the hint address in this case, making all
subsequent allocations (large and small) faster.
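The cache and its allocation path can be sketched as follows; the types and constants are illustrative, assuming 8 KiB pages and 1 bits meaning free.

```go
package main

import (
	"fmt"
	"math/bits"
)

const pageSize = 8192

// pageCache is the proposed per-P cache: a 64-page aligned base
// address plus a bitmap of free pages, allocatable without the
// heap lock.
type pageCache struct {
	base  uintptr
	cache uint64
}

// alloc returns the base address of npages contiguous free pages
// from the cache, or 0 if the request cannot be satisfied.
func (c *pageCache) alloc(npages int) uintptr {
	if npages == 1 {
		// Common case: first free page, one bit-scan instruction.
		i := bits.TrailingZeros64(c.cache)
		if i == 64 {
			return 0
		}
		c.cache &^= 1 << uint(i)
		return c.base + uintptr(i)*pageSize
	}
	// Look for npages consecutive set bits.
	mask := uint64(1)<<uint(npages) - 1
	for i := 0; i <= 64-npages; i++ {
		if c.cache>>uint(i)&mask == mask {
			c.cache &^= mask << uint(i)
			return c.base + uintptr(i)*pageSize
		}
	}
	return 0
}

func main() {
	c := pageCache{base: 0x100000, cache: 0b11110110}
	fmt.Printf("%#x\n", c.alloc(1)) // first free page (bit 1)
	fmt.Printf("%#x\n", c.alloc(3)) // three contiguous pages (bits 4-6)
}
```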
Finally, there's a little hiccup in doing this and that's that acquiring an
`mspan` object currently requires acquiring the heap lock, since these objects
are just taken out of a locked SLAB allocator.
This means that even if we can perform the allocation uncontended we still need
the heap lock to get one of these objects.
We can solve this problem by adding a pool of `mspan` objects to each P, similar
to the `sudog` cache.
### Scavenging
With the elimination of free spans, scavenging must work a little differently as
well.
The primary bit of information we're concerned with here is the `scavenged`
field currently on each span.
I propose we add a `scavenged` bitmap to each `heapArena` which mirrors the
allocation bitmap, and represents whether that page has been scavenged or not.
Allocating any pages would unconditionally clear these bits to avoid adding
extra work to the allocation path.
The scavenger's job is now conceptually much simpler.
It takes bits from both the allocation bitmap and the `scavenged` bitmap
and performs a bitwise-OR operation on the two; the zero bits of the result
mark pages that are both free and unscavenged, i.e. "scavengable".
It then scavenges any contiguous free pages it finds in a single syscall,
marking the appropriate bits in the `scavenged` bitmap.
Like the allocator, it would have a hint address to avoid walking over the same
parts of the heap repeatedly.
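For one 64-page chunk, the bit computation amounts to the following. This sketch assumes a set bit in the allocation bitmap means "in use" and a set bit in the `scavenged` bitmap means "already released".

```go
package main

import "fmt"

// scavengable computes, for one 64-page chunk, which pages may be returned
// to the OS: exactly those that are neither allocated nor already scavenged.
func scavengable(alloc, scavenged uint64) uint64 {
	return ^(alloc | scavenged)
}

func main() {
	alloc := uint64(0b0011) // pages 0 and 1 are in use
	scav := uint64(0b0100)  // page 2 was already scavenged
	// Only page 3 of the low four pages is scavengable.
	fmt.Printf("%04b\n", scavengable(alloc, scav)&0xf)
}
```
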
Because this new algorithm effectively requires iterating over the heap
backwards, there's a slight concern with how much time it could take,
specifically if it does the scavenge operation with the heap lock held like
today.
Instead, I propose that the scavenger iterate over the heap without the lock,
checking the free and scavenged bitmaps optimistically.
If it finds what appears to be a valid set of contiguous scavengable bits, it'll
acquire the heap lock, verify their validity, and scavenge.
We're still scavenging with the heap lock held as before, but scaling the
scavenger is outside the scope of this document (though we certainly have ideas
there).
#### Huge-page Awareness
Another piece of the scavenging puzzle is how to deal with the fact that the
current scavenging policy is huge-page aware.
There are two dimensions to this huge-page awareness: the runtime counts the
number of free and unscavenged huge pages for pacing purposes, and the runtime
scavenges those huge pages first.
For the first part, the scavenger currently uses an explicit ratio calculated
whenever the GC trigger is updated to determine the rate at which it should
scavenge, and it uses the number of free and unscavenged huge pages to determine
this ratio.
Instead, I propose that the scavenger releases memory one page at a time while
avoiding breaking huge pages, and it times how long releasing each page takes.
Given a 1% maximum time spent scavenging for the background scavenger, we may
then determine the amount of time to sleep, thus effectively letting the
scavenger set its own rate.
In some ways this self-pacing is more accurate because we no longer have to make
order-of-magnitude assumptions about how long it takes to scavenge.
Also, it represents a significant simplification of the scavenger from an
engineering perspective; there's much less state we need to keep around in
general.
The downside to this self-pacing idea is that we must measure time spent
sleeping and time spent scavenging, which may be funky in the face of OS-related
context switches and other external anomalies (e.g. someone puts their laptop in
sleep mode).
We can deal with such anomalies by setting bounds on how high or low our
measurements are allowed to go.
Furthermore, I propose we manage an EWMA which we feed into the time spent
sleeping to account for scheduling overheads and try to drive the actual time
spent scavenging to 1% of the time the goroutine is awake (the same pace as
before).
As far as scavenging huge pages first goes, I propose we just ignore this aspect
of the current scavenger for simplicity's sake.
In the original scavenging proposal, the purpose of scavenging huge pages first
was for throughput: we would get the biggest bang for our buck as soon as
possible, so huge pages don't get "stuck" behind small pages.
There's a question as to whether this actually matters in practice: conventional
wisdom suggests a first-fit policy tends to cause large free fragments to
congregate at higher addresses.
By analyzing and simulating scavenging over samples of real Go heaps, I think
this wisdom mostly holds true.
The graphs below show a simulation of scavenging these heaps using both
policies, counting how much of the free heap is scavenged at each moment in
time.
Ignore the simulated time; the trend is more important.
![Graph of scavenge throughput.](35112/scavenge-throughput-graph.png)
With the exception of two applications, the rest all seem to have their free and
unscavenged huge pages at higher addresses, so the simpler policy leads to a
similar rate of releasing memory.
The simulation is based on heap snapshots at the end of program execution, so
it's a little non-representative since large, long-lived objects, or clusters of
objects, could have gotten freed just before measurement.
This misrepresentation actually acts in our favor, however, since it suggests an
even smaller frequency of huge pages appearing in the middle of the heap.
## Rationale
The purpose of this proposal is to help the memory allocator scale.
To reiterate: it's currently very easy to put the heap lock in a collapsing state.
Every page-level allocation must acquire the heap lock, and with 1 KiB objects
we're already hitting that page on every 10th allocation.
To give you an idea of what kinds of timings are involved with page-level
allocations, I took a trace from a 12-hour load test from Kubernetes when I was
diagnosing
[kubernetes/kubernetes#75833](https://github.com/kubernetes/kubernetes/issues/75833#issuecomment-477758829).
92% of all span allocations were for the first 50 size classes (i.e. up to and
including 8 KiB objects).
Each of those, on average, spent 4.0µs in the critical section with the heap
locked, minus any time spent scavenging.
The mode of this latency was between 3 and 4µs, with the runner-up being between
2 and 3µs.
These numbers were taken with the load test built using Go 1.12.4 and from a
`linux/amd64` GCE instance.
Note that these numbers do not include the time it takes to acquire or release
the heap lock; it is only the time in the critical section.
I implemented a prototype of this proposal which lives outside of the Go
runtime, and optimized it over the course of a few days.
I then took heap samples from large, end-to-end benchmarks to get realistic heap
layouts for benchmarking the prototype.
The prototype benchmark then started with these heap samples and allocated out
of them until the heap was exhausted.
Without the P cache, allocations took only about 680 ns on average on a similar
GCE instance to the Kubernetes case, pretty much regardless of heap size.
This number scaled gracefully relative to allocation size as well.
To be totally clear, this time includes finding space, marking the space and
updating summaries.
It does not include clearing scavenge bits.
With the P cache included, that number dropped to 20 ns on average.
The comparison with the P cache isn't an apples-to-apples comparison since it
should include heap lock/unlock time on the slow path (and the k8s numbers
should too).
However I believe this only strengthens our case: with the P cache, in theory,
the lock will be acquired less frequently, so an apples-to-apples comparison
would be even more favorable to the P cache.
All of this doesn't even include the cost savings when freeing memory.
While I do not have numbers regarding the cost of freeing, I do know that the
free case in the current implementation is a significant source of lock
contention ([golang/go#28479](https://github.com/golang/go/issues/28479)).
Each free currently requires a treap insertion and maybe one or two removals for
coalescing.
In comparison, freeing memory in this new allocator is faster than allocation
(without the cache): we know exactly which bits in the bitmap to clear from the
address, and can quickly index into the arenas array to update them as well as
their summaries.
While updating the summaries still takes time, we can do even better by freeing
many pages within the same arena at once, amortizing the cost of this update.
In fact, the fast page sweeper that Austin added in Go 1.12 already iterates
over the heap from lowest to highest address, freeing completely free spans.
It would be straightforward to batch free operations within the same heap arena
to achieve this cost amortization.
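Batched freeing within an arena might look like the sketch below. The arena size and the single-counter "summary" are deliberate simplifications (the design's real summaries carry more structure); the point is that the summary update is paid once per batch rather than once per page.

```go
package main

import (
	"fmt"
	"math/bits"
)

const pagesPerArena = 64 * 128 // illustrative arena size in pages

type arena struct {
	alloc   [pagesPerArena / 64]uint64 // 1 bits are allocated pages
	summary int                        // simplified summary: free page count
}

// freeBatch frees a batch of page indices within one arena, amortizing the
// summary update across the whole batch.
func (a *arena) freeBatch(pages []uint) {
	for _, p := range pages {
		a.alloc[p/64] &^= 1 << (p % 64) // clear the page's allocation bit
	}
	// One summary recomputation for the whole batch.
	free := 0
	for _, w := range a.alloc {
		free += 64 - bits.OnesCount64(w)
	}
	a.summary = free
}

func main() {
	var a arena
	for i := range a.alloc {
		a.alloc[i] = ^uint64(0) // start fully allocated
	}
	a.freeBatch([]uint{0, 1, 2, 130})
	fmt.Println(a.summary) // 4 pages now free
}
```
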
In sum, this new page allocator design has the potential not only to solve our
immediate scalability problem, but also to give us more headroom for future
optimizations compared to the current treap-based allocator, for which a number
of caching strategies have been designed and/or attempted.
### Fragmentation Concerns
The biggest way fragmentation could worsen with this design is as a result of
the P cache.
The P cache makes it so that allocation isn't exactly a serialized,
single-threaded first-fit, and a P may hold onto pages which another P may need
more urgently.
In practice, given an in-tree prototype, we've seen that this fragmentation
scales with the number of P's, and we believe this to be a reasonable trade-off:
more processors generally require more memory to take advantage of parallelism.
### Prior Art
As far as prior art is concerned, there hasn't been much work with bitmap
allocators for languages which have a GC.
Consider Go against other managed languages with a GC: Go's GC sits in a
fairly unique point in the design space for GCs because it is a non-moving
collector.
Most allocators in other managed languages (e.g. Java) tend to be bump
allocators, since they tend to have moving and compacting GCs.
Other managed languages also tend to have many more allocations of smaller size
compared to Go, so the "slow path" of allocating pages is usually just grabbing
a fixed-size block out of a shared pool, which can be made to be quite fast.
Go relies on being able to allocate blocks of different sizes to reduce
fragmentation in its current allocator design.
However, when considering non-GC'd languages, e.g. C/C++, there has been some
notable work using bitmaps in memory allocators.
In particular,
[DieHard](https://github.com/emeryberger/DieHard/blob/master/docs/pldi2006-diehard.pdf),
[DieHarder](https://www.usenix.org/legacy/event/woot/tech/final_files/Novark.pdf),
and [Mesh](https://arxiv.org/pdf/1902.04738.pdf).
DieHard and DieHarder in particular implement an efficient amortized `O(1)`
bitmap-based allocator, though that was not their primary contribution.
Mesh uses bitmaps for managing which slots are free in a span, like the Go
allocator.
We were not aware of DieHard(er) at the time of writing this proposal, though
they use bitmaps to track individual objects instead of pages.
There are also a few niche cases where they are used such as [GPU-accelerated
allocation](https://arxiv.org/abs/1810.11765) and [real-time
applications](http://www.gii.upv.es/tlsf/).
A good point of comparison for Go's current page allocator is TCMalloc, and in
many ways Go's memory allocator is based on TCMalloc.
However, there are some key differences that arise as a result of Go's GC.
Notably, TCMalloc manages its per-CPU caches as arrays of pointers, rather than
through spans directly like Go does.
The reason for this, as far as I can tell, is because when a free occurs in
TCMalloc, that object is immediately available for re-use, whereas with Go,
object lifetimes are effectively rounded up to a GC cycle.
As a result of this global, bulk (de)allocation behavior resulting in the lack
of short-term re-use, I suspect Go tends to ask the page allocator for memory
more often than TCMalloc does.
This bulk (de)allocation behavior would thus help explain why page allocator
scalability hasn't been such a big issue for TCMalloc (again, as far as I'm
aware).
In sum, Go sits in a unique point in the memory management design space.
The bitmap allocator fits this point in the design space well: bulk allocations
and frees can be grouped together to amortize the cost of updating the summaries
thanks to the GC.
Furthermore, since we don't move objects in the heap, we retain the flexibility
of dealing with fragments efficiently through the radix tree.
### Considered Alternatives
#### Cache spans in a P
One considered alternative is to keep the current span structure, and instead
try to cache the spans themselves on a P, splitting them on each allocation
without acquiring the heap lock.
While this seems like a good idea in principle, one big limitation is that you
can only cache contiguous regions of free memory.
Suppose many heap fragments tend to just be one page in size: one ends up having
to go back to the page allocator every single time anyway.
While it is true that one might only cache one page from the heap in the
proposed design, this case is fairly rare in practice, since it picks up any
available memory it can find in a given 64-page aligned region.
The proposed design also tends to have nicer properties: the treap structure
scales logarithmically (probabilistically) with respect to the number of free
heap fragments, but even this property doesn't scale too well to very large
heaps; one might have to chase down 20 pointers in a 20 GiB heap just for an
allocation, not to mention the additional removals required.
Small heaps may see allocations as fast as 100 ns, whereas large heaps may see
page allocation latencies of 4 µs or more on the same hardware.
On the other hand, the proposed design has a very consistent performance profile
since the radix tree is effectively perfectly balanced.
Furthermore, this idea of caching spans only helps the allocation case.
In most cases the source of contention is not only allocation but also freeing,
since we always have to do a treap insertion (and maybe one or two removals) on
the free path.
In this proposal, the free path is much more efficient in general (no complex
operations required, just clearing memory), even though it still requires
acquiring the heap lock.
Finally, caching spans doesn't really offer much headroom in terms of future
optimization, whereas switching to a bitmap allocator allows us to make a
variety of additional optimizations because the design space is mostly
unexplored.
## Compatibility
This proposal changes no public APIs in either syntax or semantics, and is
therefore Go 1 backwards compatible.
## Implementation
Michael Knyszek will implement this proposal.
The implementation will proceed as follows:
1. Change the scavenger to be self-paced to facilitate an easier transition.
1. Graft the prototype (without the P cache) into the runtime.
* The plan is to do this as a few large changes which are purely additive
and with tests.
* The two allocators will live side-by-side, and we'll flip between the two
in a single small change.
1. Delete unnecessary code from the old allocator.
1. Create a pool of `mspan` objects for each P.
1. Add a page cache to each P.
# Generics implementation - Stenciling
This document describes a method to implement the [Go generics proposal](https://go.googlesource.com/proposal/+/refs/heads/master/design/go2draft-type-parameters.md). This method generates multiple implementations of a generic function by instantiating the body of that generic function for each set of types with which it is instantiated. By “implementation” here, I mean a blob of assembly code and associated descriptors (pcln tables, etc.). This proposal stands in opposition to the [Generics Implementation - Dictionaries](https://go.googlesource.com/proposal/+/refs/heads/master/design/generics-implementation-dictionaries.md) proposal, where we generate a single implementation of a generic function that handles all possible instantiated types. The [Generics Implementation - GC Shape Stenciling](https://go.googlesource.com/proposal/+/refs/heads/master/design/generics-implementation-gcshape.md) proposal is a hybrid of that proposal and this one.
Suppose we have the following generic function
```
func f[T1, T2 any](x int, y T1) T2 {
...
}
```
And we have two call sites of `f`
```
var a float64 = f[int, float64](7, 8.0)
var b struct{f int} = f[complex128, struct{f int}](3, 1+1i)
```
Then we generate two versions of `f`, compiling each one into its own implementation:
```
func f1(x int, y int) float64 {
... identical bodies ...
}
func f2(x int, y complex128) struct{f int} {
... identical bodies ...
}
```
This design doc walks through the details of how this compilation strategy would work.
## Naming
The binary will now have potentially multiple instances of a function in it. How will those functions be named? In the example above, just calling them `f1` and `f2` won’t work.
At least for the linker, we need names that will unambiguously indicate which implementation we’re talking about. This doc proposes to decorate the function name with each type parameter name, in brackets:
```
f[int, float64]
f[complex, struct{f int}]
```
The exact syntax doesn’t really matter, but it must be unambiguous. The type names will be formatted using cmd/compile/internal/types/Type.ShortString (as is used elsewhere, e.g. for type descriptors).
Should we show these names to users? For panic tracebacks, it is probably ok, since knowing the type parameters could be useful (as are regular parameters, which we also show). But what about CPU profiles? Should we unify profiles of differently-instantiated versions of the same function? Or keep them separate?
## Instantiation
Because we don’t know what types `f` will be instantiated with when we compile `f` itself (but see the section on type lists below), we can’t generate the implementations at the definition of `f`. We must generate the implementations at the callsites of `f`.
At each callsite of `f` where type parameters are provided, we must generate a new implementation of `f`. To generate an implementation, the compiler must have access to the body of `f`. To facilitate that, the object file must contain the body of any generic functions, so that the compiler can compile them again, with possibly different type parameters, during compilation of the calling package (this mechanism already exists for inlining, so there is maybe not much work to do here).
It isn’t obvious at some callsites what the concrete type parameters are. For instance, consider `g`:
```
func g[T any](x T) float64 {
return f[T, float64](5, x)
}
```
The callsite of `f` in `g` doesn't know the concrete values of all of `f`'s type parameters. We won't be able to generate an implementation of `f` until the point where `g` is instantiated. So implementing a function at a callsite might require recursively implementing callees at callsites in its body. (TODO: where would the caller of `g` get the body of `f` from? Is that also in the object file somewhere?)
How do we generate implementations in cases of general recursion?
```
func r1[X, Y, Z any]() {
r2[X, Y, Z]()
}
func r2[X, Y, Z any]() {
r1[Y, Z, X]()
}
r1[int8, int16, int32]()
```
What implementations does this generate? I think the answer is clear, but we need to make sure our build system comes up with that answer without hanging the compiler in an infinite recursion. We’d need to record the existence of an instantiation (which we probably want to do anyway, to avoid generating `f[int, float64]` twice in one compilation unit) before generating code for that instantiation.
## Type lists
If a generic function has a type parameter that has a type constraint which contains a type list, then we could implement that function at its definition, with each element of that type list. Then we wouldn’t have to generate an implementation at each call site. This strategy is fragile, though. Type lists are understood as listing underlying types (under the generics proposal as of this writing), so the set of possible instantiating types is still infinite. But maybe we generate an instantiation for each unnamed type (and see the deduplication section for when it could be reused for a named type with the same underlying type).
## Deduplication
The compiler will be responsible for generating only one implementation for a given particular instantiation (function + concrete types used for instantiation). For instance, if you do:
```
f[int, float64](3, 5)
f[int, float64](4, 6)
```
If both of these calls are in the same package, the compiler can detect the duplication and generate the implementation `f[int, float64]` only once.
If the two calls to `f` are in different packages, though, then things aren’t so simple. The two compiler invocations will not know about each other. The linker will be responsible for deduplicating implementations resulting from instantiating the same function multiple times. In the example above, the linker will end up seeing two `f[int, float64]` symbols, each one generated by a different compiler invocation. The functions will be marked as DUPOK so the linker will be able to throw one of them away. (Note: due to the relaxed semantics of Go’s function equality, the deduplication is not required; it is just an optimization.)
Note that the build system already deals with deduplicating code. For example, the generated equality and hash functions are deduplicated for similar reasons.
## Risks
There are two main risks with this implementation, which are related.
1. This strategy requires more compile time, as we end up compiling the same instantiation multiple times in multiple packages.
2. This strategy requires more code space, as there will be one copy of `f` for each distinct set of type parameters it is instantiated with. This can lead to large binaries and possibly poor performance (icache misses, branch mispredictions, etc.).
For the first point, there are some possible mitigations. We could enlist the go tool to keep track of the implementations that a particular compiler run generated (recorded in the export data somewhere), and pass the name of those implementations along to subsequent compiler invocations. Those subsequent invocations could avoid generating that same implementation again. This mitigation wouldn’t work for compilations that were started in parallel from the go tool, however. Another option is to have the compiler report back to the go tool the implementations it needs. The go tool can then deduplicate that list and invoke the compiler again to actually generate code for that deduped list. This mitigation would add complexity, and possibly compile time, because we’d end up calling the compiler multiple times for each package. In any case, we can’t reduce the number of compilations beyond that needed for each unique instantiation, which still might be a lot. Which leads to point 2...
For the second point, we could try to deduplicate implementations which have different instantiated types, but which differ in ways that don't matter to the generated code. For instance, if we have
```
type myInt int
f[int, bool](3, 4)
f[myInt, bool](5, 6)
```
Do we really need multiple implementations of `f`? It might be required, if for example `f` assigns its second argument to an `interface{}`-typed variable. But maybe `f` only depends on the underlying type of its second argument (adds values of that type together and then compares them, say), in which case the implementations could share code.
I suspect there will be lots of cases where sharing is possible, if the underlying types are indistinguishable w.r.t. the garbage collector (same size and ptr/nonptr layout). We’d need to detect the tricky cases somehow, maybe using summary information about what properties of each generic parameter a function uses (including things it calls with those same parameters, which makes it tricky when recursion is involved).
If we deduplicate in this fashion, it complicates naming. How do we name the two implementations of `f` shown above? (They would be named `f[int, bool]` and `f[myInt, bool]` by default.) Which do we pick? Or do we name it `f[underlying[int], bool]`? Or can we give one implementation multiple names? Which name do we show in backtraces, debuggers, profilers?
Another option here is to have the linker do content-based deduplication. Only if the assembly code of two functions is identical, will the implementations be merged. (In fact, this might be a desirable feature independent of generics.) This strategy nicely sidesteps the problem of how to decide whether two implementations can share the same code - we compile both and see. (Comparing assembly for identical behavior is nontrivial, as we would need to recursively compare any symbols referenced by relocations, but the linker already has some ability to do this.)
Idea: can we generate a content hash for each implementation, so the linker can dedup implementations without even loading the implementation into memory?
# GC Pacer Redesign
Author: Michael Knyszek
Updated: 8 February 2021
## Abstract
Go's tracing garbage collector runs concurrently with the application, and thus
requires an algorithm to determine when to start a new cycle.
In the runtime, this algorithm is referred to as the pacer.
Until now, the garbage collector has framed this process as an optimization
problem, utilizing a proportional controller to achieve a desired stopping-point
(that is, the cycle completes just as the heap reaches a certain size) as well
as a desired CPU utilization.
While this approach has served Go well for a long time, it has accrued many
corner cases due to resolved issues, as well as a backlog of unresolved issues.
I propose redesigning the garbage collector's pacer from the ground up to
capture the things it does well and eliminate the problems that have been
discovered.
More specifically, I propose:
1. Including all non-heap sources of GC work (stacks, globals) in pacing
decisions.
1. Reframing the pacing problem as a search problem.
1. Extending the hard heap goal to the worst-case heap goal of the next GC.
(1) will resolve long-standing issues with small heap sizes, allowing the Go
garbage collector to scale *down* and act more predictably in general.
(2) will eliminate offset error present in the current design, will allow
turning off GC assists in the steady-state, and will enable clearer designs for
setting memory limits on Go applications.
(3) will enable smooth and consistent response to large changes in the live heap
size with large `GOGC` values.
## Background
Since version 1.5 Go has had a tracing mark-sweep garbage collector (GC) that is
able to execute concurrently with user goroutines.
The garbage collector manages several goroutines called "workers" to carry out
its task.
A key problem in concurrent garbage collection is deciding when to begin, such
that the work is complete "on time."
Timeliness, today, is defined by the optimization of two goals:
1. The heap size relative to the live heap at the end of the last cycle, and
1. A target CPU utilization for the garbage collector while it is active.
These two goals are tightly related.
If a garbage collection cycle starts too late, for instance, it may consume more
CPU to avoid missing its target.
If a cycle begins too early, it may end too early, resulting in GC cycles
happening more often than expected.
Go's garbage collector sets a fixed target of 30% CPU utilization (25% from GC
workers, with 5% from user goroutines donating their time to assist) while the
GC is active.
It also offers a parameter to allow the user to set their own memory use and CPU
trade-off: `GOGC`.
`GOGC` is a percent overhead describing how much *additional* memory (over the
live heap) the garbage collector may use.
A higher `GOGC` value indicates that the garbage collector may use more memory,
setting the target heap size higher, and conversely a lower `GOGC` value sets
the target heap size lower.
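The `GOGC` trade-off described here is, in code form, just a percentage over the live heap (this is the classic definition; the design below extends what the extra fraction is measured against):

```go
package main

import "fmt"

// heapTarget returns the classic GOGC-based target heap size: the live heap
// plus GOGC percent of it.
func heapTarget(liveBytes, gogc uint64) uint64 {
	return liveBytes + liveBytes*gogc/100
}

func main() {
	live := uint64(100 << 20) // 100 MiB live heap
	fmt.Println(heapTarget(live, 100) >> 20) // GOGC=100: 200 MiB target
	fmt.Println(heapTarget(live, 50) >> 20)  // GOGC=50: 150 MiB target
}
```

This is why, as noted later in the document, a program with `GOGC=100` really needs roughly 2x its peak live heap.
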
The process of deciding when a garbage collection should start given these
parameters has often been called "pacing" in the Go runtime.
To attempt to reach its goals, Go's "pacer" utilizes a proportional controller
to decide when to start a garbage collection cycle.
The controller attempts to find the correct point to begin directly, given an
error term that captures the two aforementioned optimization goals.
It's worth noting that the optimization goals are defined for some steady-state.
Today, the steady-state is implicitly defined as: constant allocation rate,
constant heap size, and constant heap composition (hence, constant mark rate).
The pacer expects the application to settle on some average global behavior
across GC cycles.
However, the GC is still robust to transient application states.
When the GC is in some transient state, the pacer is often operating with stale
information, and is actively trying to find the new steady-state.
To avoid issues with memory blow-up, among other things, the GC makes allocating
goroutines donate their time to assist the garbage collector, proportionally to
the amount of memory that they allocate.
This GC assist system keeps memory use stable in unstable conditions, at the
expense of user CPU time and latency.
The GC assist system operates by dynamically computing an assist ratio.
The assist ratio is the slope of a curve in the space of allocation time and GC
work time, a curve that the application is required to stay under.
This assist ratio is then used as a conversion factor between the amount a
goroutine has allocated, and how much GC assist work it should do.
Meanwhile, GC workers generate assist credit from the work that they do and
place it in a global pool that allocating goroutines may steal from to avoid
having to assist.
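The assist bookkeeping can be sketched as follows. The names, units, and the simple steal-then-assist flow here are illustrative assumptions, not the runtime's actual data structures.

```go
package main

import "fmt"

// assistState sketches the assist mechanism: the assist ratio converts
// bytes allocated into scan work owed, and background workers bank credit
// that allocating goroutines may steal before assisting themselves.
type assistState struct {
	workPerByte float64 // assist ratio: scan work owed per byte allocated
	bgCredit    int64   // scan credit banked by dedicated GC workers
}

// owed returns how much scan work a goroutine must perform itself after
// allocating n bytes, after stealing what it can from background credit.
func (a *assistState) owed(n int64) int64 {
	debt := int64(float64(n) * a.workPerByte)
	steal := debt
	if steal > a.bgCredit {
		steal = a.bgCredit
	}
	a.bgCredit -= steal
	return debt - steal
}

func main() {
	a := assistState{workPerByte: 0.5, bgCredit: 10}
	fmt.Println(a.owed(40)) // debt 20, steals 10 from credit, owes 10
	fmt.Println(a.owed(40)) // credit exhausted, owes the full 20
}
```
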
## Motivation
Since version 1.5, the pacer has gone through several minor tweaks and changes
in order to resolve issues, usually adding special cases and making its behavior
more difficult to understand, though resolving the motivating problem.
Meanwhile, more issues have been cropping up that are diagnosed but more
difficult to tackle in the existing design.
Most of these issues are listed in the [GC pacer
meta-issue](https://github.com/golang/go/issues/42430).
Even more fundamentally, the proportional controller at the center of the pacer
is demonstrably unable to completely eliminate error in its scheduling, a
well-known issue with proportional-only controllers.
Another significant motivator, beyond resolving latent issues, is that the Go
runtime lacks facilities for dealing with finite memory limits.
While the `GOGC` mechanism works quite well and has served Go for a long time,
it falls short when there's a hard memory limit.
For instance, a consequence of `GOGC` that often surprises new gophers coming
from languages like Java, is that if `GOGC` is 100, then Go really needs 2x more
memory than the peak live heap size.
The garbage collector will *not* automatically run more aggressively as it
approaches some memory limit, leading to out-of-memory errors.
Conversely, users that know they'll have a fixed amount of memory up-front are
unable to take advantage of it if their live heap is usually small.
Users have taken to fooling the GC into thinking more memory is live than their
application actually needs in order to let the application allocate more memory
in between garbage collections.
Simply increasing `GOGC` doesn't tend to work in this scenario either because of
the previous problem: if the live heap spikes suddenly, `GOGC` will result in
much more memory being used overall.
See issue [#42430](https://github.com/golang/go/issues/42430) for more details.
The current pacer is not designed with these use-cases in mind.
## Design
### Definitions
![Equation 1](44167/eqn1.png)
There is some nuance to these definitions.
Firstly, ![`\gamma`](44167/inl1.png) is used in place of `GOGC` because it makes
the math easier to understand.
Secondly, ![`S_n`](44167/inl2.png) may vary throughout the sweep phase, but
effectively becomes fixed once a GC cycle starts.
Stacks may not shrink, only grow during this time, so there's a chance any value
used by the runtime during a GC cycle will be stale.
![`S_n`](44167/inl2.png) also includes space that may not be actively used for
the stack.
That is, if an 8 KiB goroutine stack is actually only 2 KiB high (and thus only
2 KiB is actually scannable), for consistency's sake the stack's height will be
considered 8 KiB.
Both of these estimates introduce the potential for skew.
In general, however, stacks are roots in the GC and will be some of the first
sources of work for the GC, so the estimate should be fairly close.
If that turns out not to be true in practice, it is possible, though tricky, to
track goroutine stack heights more accurately; in any case there must always be
some imprecision, because the actual scannable stack height is rapidly
changing.
Thirdly, ![`G_n`](44167/inl3.png) acts similarly to ![`S_n`](44167/inl2.png).
The amount of global memory in a Go program can change while the application is
running because of the `plugin` package.
This action is relatively rare compared to a change in the size of stacks.
Because of this rarity, I propose allowing a bit of skew.
At worst (as we'll see later) the pacer will overshoot a little bit.
Lastly, ![`M_n`](44167/inl4.png) is the amount of heap memory known to be live
to the runtime the *instant* after a garbage collection cycle completes.
Intuitively, it is the bottom of the classic GC sawtooth pattern.
### Heap goal
Like in the [previous definition of the
pacer](https://docs.google.com/document/d/1wmjrocXIWTr1JxU-3EQBI6BK6KgtiFArkG47XK73xIQ/edit#heading=h.poxawxtiwajr),
the runtime sets some target heap size for the GC cycle based on `GOGC`.
Intuitively, this target heap size is the targeted heap size at the top of the
classic GC sawtooth pattern.
The definition I propose is very similar, except it includes non-heap sources of
GC work.
Let ![`N_n`](44167/inl5.png) be the heap goal for GC ![`n`](44167/inl6.png) ("N"
stands for "Next GC").
![Equation 2](44167/eqn2.png)
The old definition makes the assumption that non-heap sources of GC work are
negligible.
In practice, that is often not true, such as with small heaps.
This definition says that we're trading off not just heap memory, but *all*
memory that influences the garbage collector's CPU consumption.
From a philosophical standpoint wherein `GOGC` is intended to be a knob
controlling the trade-off between CPU resources and memory footprint, this
definition is more accurate.
This change has one large user-visible ramification: the default `GOGC`, in most
cases, will use slightly more memory than before.
This change will inevitably cause some friction, but I believe the change is
worth it.
It unlocks the ability to scale *down* to heaps smaller than 4 MiB (the origin
of this limit is directly tied to this lack of accounting).
It also unlocks better behavior in applications with many, or large, goroutine
stacks, or very many globals.
That GC work is now accounted for, leading to fewer surprises.
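To make the accounting concrete, here is a small sketch of a heap goal computed this way. The precise formula is in the Equation 2 image above; this code assumes the natural reading of the prose, namely that the `GOGC`-derived fraction applies to all scannable memory (heap, stacks, and globals). The names are illustrative, not the runtime's.

```go
package main

import "fmt"

// heapGoal sketches the proposed goal computation. gamma is 1+GOGC/100
// (the document's γ). liveHeap (M_n), stackSpace (S_n), and
// globalSpace (G_n) are all in bytes. The key change from the old pacer
// is that stacks and globals count as sources of GC work, so small
// heaps no longer get disproportionate runway.
func heapGoal(gamma float64, liveHeap, stackSpace, globalSpace uint64) uint64 {
	work := liveHeap + stackSpace + globalSpace
	return liveHeap + uint64((gamma-1)*float64(work))
}

func main() {
	// GOGC=100 => gamma=2. A 10 MiB live heap with 1 MiB of stacks and
	// 0.5 MiB of globals: 10 MiB + 1.0×(11.5 MiB) = 22544384 bytes,
	// slightly above the old pacer's 20 MiB goal.
	fmt.Println(heapGoal(2.0, 10<<20, 1<<20, 512<<10))
}
```

Note how, with `GOGC=100`, the goal exceeds 2x the live heap exactly by the stack and global space: that extra memory is the cost of honestly amortizing the non-heap GC work.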
### Deciding when to trigger a GC
Unlike the current pacer, I propose that instead of finding the right point to
start a GC such that the runtime reaches some target in the steady-state, that
the pacer instead searches for a value that is more fundamental, though more
indirect.
Before continuing, I want to take a moment to point out some fundamental and
necessary assumptions made in both this design and the current pacer.
Here, we are taking a "macro-economic" view of the Go garbage collector.
The actual behavior of the application at the "micro" level is all about
individual allocations, but the pacer is concerned not with the moment-to-moment
behavior of the application.
Instead, it concerns itself with broad aggregate patterns.
And evidently, this abstraction is useful.
Most programs are not wildly unpredictable in their behavior; in fact it's
somewhat of a challenge to write a useful application that non-trivially has
unpredictable memory allocation behavior, thanks to the law of large numbers.
This observation is why it is useful to talk about the steady-state of an
application *at all*.
The pacer concerns itself with two notions of time: the time it takes to
allocate from the GC trigger point to the heap goal and the time it takes to
find and perform all outstanding GC work.
These are only *notions* of time because the pacer's job is to make them happen
in the *same* amount of time, relative to a wall clock.
Since in the steady-state the amount of GC work (broadly speaking) stays fixed,
the pacer is then concerned with figuring out how early it should start such
that it meets its goal.
Because they should happen in the *same* amount of time, this question of "how
early" is answered in "bytes allocated so far."
So what's this more fundamental value? Suppose we model a Go program as
follows: while a GC is active, the world is split in two: the application is
either spending time on itself and potentially allocating, or on performing GC
work.
This model is "zero-sum," in a sense: if more time is spent on GC work, then
less time is necessarily spent on the application and vice versa.
Given this model, suppose we had two measures of program behavior during a GC
cycle: how often the application is allocating, and how rapidly the GC can scan
and mark memory.
Note that these measures are *not* peak throughput.
They are the rates at which allocation and GC work actually happen during a GC.
To give them a concrete unit, let's say they're bytes per CPU-second per core.
The idea with this unit is to have some generalized, aggregate notion of this
behavior, independent of available CPU resources.
We'll see why this is important shortly.
Let's call these rates ![`a`](44167/inl7.png) and ![`s`](44167/inl8.png)
respectively.
In the steady-state, these rates aren't changing, so we can use them to predict
when to start a garbage collection.
Coming back to our model, some amount of CPU time is going to go to each of
these activities.
Let's say our target GC CPU utilization in the steady-state is
![`u_t`](44167/inl9.png).
If ![`C`](44167/inl10.png) is the number of CPU cores available and
![`t`](44167/inl11.png) is some wall-clock time window, then
![`a(1-u_t)Ct`](44167/inl12.png) bytes will be allocated and ![`s u_t
Ct`](44167/inl13.png) bytes will be scanned in that window.
Notice that the *ratio* of "bytes allocated" to "bytes scanned" is constant in the
steady-state in this model, because both ![`a`](44167/inl7.png) and
![`s`](44167/inl8.png) are constant.
Let's call this ratio ![`r`](44167/inl14.png).
To make things a little more general, let's make ![`r`](44167/inl14.png) also a
function of utilization ![`u`](44167/inl15.png), because part of the Go garbage
collector's design is the ability to dynamically change CPU utilization to keep
it on-pace.
![Equation 3](44167/eqn3.png)
The big idea here is that this value, ![`r(u)`](44167/inl16.png) is a
*conversion rate* between these two notions of time.
Consider the following: in the steady-state, the runtime can perfectly back out
the correct time to start a GC cycle, given that it knows exactly how much work
it needs to do.
Let ![`T_n`](44167/inl17.png) be the trigger point for GC cycle
![`n`](44167/inl6.png).
Let ![`P_n`](44167/inl18.png) be the size of the live *scannable* heap at the
end of GC ![`n`](44167/inl6.png).
More precisely, ![`P_n`](44167/inl18.png) is the subset of
![`M_n`](44167/inl4.png) that contains pointers.
Why include only pointer-ful memory? Because GC work is dominated by the cost
of the scan loop, and each pointer that is found is marked; memory containing Go
types without pointers is never touched, and so is totally ignored by the GC.
Furthermore, this *does* include non-pointers in pointer-ful memory, because
scanning over those is a significant cost in GC, enough so that GC is roughly
proportional to it, not just the number of pointer slots.
In the steady-state, the size of the scannable heap should not change, so
![`P_n`](44167/inl18.png) remains constant.
![Equation 4](44167/eqn4.png)
That's nice, but we don't know ![`r`](44167/inl14.png) while the runtime is
executing.
And worse, it could *change* over time.
But if the Go runtime can somehow accurately estimate and predict
![`r`](44167/inl14.png) then it can find a steady-state.
Suppose we had some prediction of ![`r`](44167/inl14.png) for GC cycle
![`n`](44167/inl6.png) called ![`r_n`](44167/inl19.png).
Then, our trigger condition is a simple extension of the formula above.
Let ![`A`](44167/inl20.png) be the size of the Go live heap at any given time.
![`A`](44167/inl20.png) is thus monotonically increasing during a GC cycle, and
then instantaneously drops at the end of the GC cycle.
In essence ![`A`](44167/inl20.png) *is* the classic GC sawtooth pattern.
![Equation 5](44167/eqn5.png)
Note that this formula is in fact a *condition* and not a predetermined trigger
point, like the trigger ratio.
In fact, this formula could transform into the previous formula for
![`T_n`](44167/inl17.png) if it were not for the fact that
![`S_n`](44167/inl2.png) actively changes during a GC cycle, since the rest of
the values are constant for each GC cycle.
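As a rough illustration, the trigger can be evaluated as a condition while the heap grows, rather than precomputed like the old trigger ratio. Equation 5 itself is an image; the sketch below assumes one plausible reading of the surrounding prose, where the GC starts once the remaining runway to the goal no longer exceeds the expected scan work converted into allocation bytes via ![`r_n`](44167/inl19.png). All names here are mine.

```go
package main

import "fmt"

// shouldTrigger sketches the proposed trigger *condition*, re-checked
// as allocation proceeds (S_n can change mid-cycle, so there is no
// fixed precomputed trigger point). live is A, goal is N_n, r is the
// predicted conversion rate r_n, and expectedWork stands in for the
// predicted scan work (roughly P_{n-1} + S_n + G_n).
func shouldTrigger(live, goal uint64, r float64, expectedWork uint64) bool {
	runway := float64(goal) - float64(live)
	return runway <= r*float64(expectedWork)
}

func main() {
	// With a 20 MiB goal, r=0.5, and 16 MiB of expected scan work,
	// the GC should start once under 8 MiB of runway remains.
	fmt.Println(shouldTrigger(11<<20, 20<<20, 0.5, 16<<20)) // 9 MiB left: false
	fmt.Println(shouldTrigger(13<<20, 20<<20, 0.5, 16<<20)) // 7 MiB left: true
}
```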
A big question remains: how do we predict ![`r`](44167/inl14.png)?
To answer that, we first need to determine how to measure
![`r`](44167/inl14.png) at all.
I propose a straightforward approximation: each GC cycle, take the amount of
memory allocated, divide it by the amount of memory scanned, and scale it from
the actual GC CPU utilization to the target GC CPU utilization.
Note that this scaling factor is necessary because we want our trigger to use an
![`r`](44167/inl14.png) value that is at the target utilization, such that the
GC is given enough time to *only* use that amount of CPU.
This note is a key aspect of the proposal and will come up later.
What does this scaling factor look like? Recall that because of our model, any
value of ![`r`](44167/inl14.png) has a ![`1-u`](44167/inl21.png) factor in the
numerator and a ![`u`](44167/inl15.png) factor in the denominator.
Scaling from one utilization to another is as simple as switching out factors.
Let ![`\hat{A}_n`](44167/inl22.png) be the actual peak live heap size at the end
of a GC cycle (as opposed to ![`N_n`](44167/inl5.png), which is only a target).
Let ![`u_n`](44167/inl23.png) be the GC CPU utilization over cycle
![`n`](44167/inl6.png) and ![`u_t`](44167/inl9.png) be the target utilization.
Altogether,
![Equation 6](44167/eqn6.png)
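The measurement and rescaling just described can be sketched as follows. This is not the runtime's code; it simply encodes the rule from the prose that any ![`r`](44167/inl14.png) value carries a ![`1-u`](44167/inl21.png) factor in the numerator and a ![`u`](44167/inl15.png) factor in the denominator, so converting from the measured utilization to the target swaps those factors.

```go
package main

import "fmt"

// measureR sketches the per-cycle measurement: bytes allocated over the
// cycle divided by bytes scanned, rescaled from the actual GC CPU
// utilization uN to the target uT. The raw ratio is r at uN; swapping
// the (1-u) numerator factor and u denominator factor yields r at uT.
func measureR(allocated, scanned, uN, uT float64) float64 {
	raw := allocated / scanned
	return raw * (uN / (1 - uN)) * ((1 - uT) / uT)
}

func main() {
	// 64 MiB allocated vs. 16 MiB scanned at 50% actual utilization,
	// rescaled to a 25% target: at lower GC utilization the
	// application gets more of the wall clock to allocate, so r grows.
	fmt.Println(measureR(64, 16, 0.5, 0.25)) // 4 × 1 × 3 = 12
}
```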
Now that we have a way to measure ![`r`](44167/inl14.png), we could use this
value directly as our prediction.
But I fear that using it directly has the potential to introduce a significant
amount of noise, so smoothing over transient changes to this value is desirable.
To do so, I propose using this measurement as the set-point for a
proportional-integral (PI) controller.
This means that the PI controller is always chasing whatever value was measured
for each GC cycle.
In a steady-state with only small changes, this means the PI controller acts as
a smoothing function.
The advantage of a PI controller over a proportional controller is that it
guarantees that steady-state error will be driven to zero.
Note that the current GC pacer has issues with offset error.
It may also find the wrong point on the isocline of GC CPU utilization and peak
heap size because the error term can go to zero even if both targets are missed.
The disadvantage of a PI controller, however, is that it oscillates and may
overshoot significantly on its way to reaching a steady value.
This disadvantage could be mitigated by overdamping the controller, but I
propose we tune it using the tried-and-tested standard Ziegler-Nichols method.
In simulations (see [the simulations section](#simulations)) this tuning tends
to work well.
It's worth noting that PI (more generally, PID controllers) have a lot of years
of research and use behind them, and this design lets us take advantage of that
and tune the pacer further if need be.
Why a PI controller and not a PID controller?
Firstly, PI controllers are simpler to reason about.
Secondly, the derivative term in a PID controller tends to be sensitive to
high-frequency noise, and the entire point here is to smooth over noise.
Furthermore, the advantage of the derivative term is a shorter rise time, but
simulations show that the rise time is roughly 1 GC cycle, so I don't think
there's much reason to include it.
Adding the derivative term though is trivial once the rest of the design is in
place, so the door is always open.
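For illustration, a minimal PI controller of the kind proposed might look like the following. The gains here are arbitrary placeholders, not the Ziegler-Nichols-tuned values the design calls for, and the structure is a simplification of whatever the runtime would actually use.

```go
package main

import "fmt"

// piController is a minimal sketch of the smoothing proposed for r_n.
type piController struct {
	kp, ki float64 // proportional and integral gains (illustrative only)
	errSum float64 // accumulated (integrated) error
	value  float64 // current output, i.e. the prediction r_n
}

// next feeds in one per-cycle measurement and returns the smoothed
// output. The integral term is what drives steady-state error to zero:
// if the measurements settle, the output converges to them exactly.
func (c *piController) next(measured float64) float64 {
	err := measured - c.value
	c.errSum += err
	c.value += c.kp*err + c.ki*c.errSum
	return c.value
}

func main() {
	c := &piController{kp: 0.5, ki: 0.1}
	// Feed a constant measurement: after a transient (which may
	// overshoot, as the text warns), the output converges to it.
	var out float64
	for i := 0; i < 50; i++ {
		out = c.next(2.0)
	}
	fmt.Printf("%.2f\n", out)
}
```

Tracing the first few cycles shows the overshoot-then-settle behavior the text describes; overdamping would trade that overshoot for a slower rise.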
By focusing on this ![`r`](44167/inl14.png) value, we've now reframed the pacing
problem as a search problem instead of an optimization problem.
That raises a question: are we still reaching our optimization goals? And how
do GC assists fit into this picture?
The good news is that we're always triggering for the right CPU utilization.
Because ![`r`](44167/inl14.png) is scaled for the *target* GC CPU utilization
and ![`r`](44167/inl14.png) picks the trigger, the pacer will naturally start at
a point that generates that utilization in the steady-state.
Following from this fact, there is no longer any reason to have the target GC
CPU utilization be 30%.
Originally, in the design for the current pacer, the target GC CPU utilization
began at 25%, with GC assists always *extending* from that, so that in the
steady-state there would be no GC assists.
However, because the pacer was structured to solve an optimization problem, it
required feedback from both directions.
That is, it needed to know whether it was acting too aggressively *or* not
aggressively enough.
This feedback could only be obtained by actually performing GC assists.
But with this design, that's no longer necessary.
The target CPU utilization can completely exclude GC assists in the steady-state
with a mitigated risk of bad behavior.
As a result, I propose the target utilization be reduced once again to 25%,
eliminating GC assists in the steady-state (that's not out-pacing the GC), and
potentially improving application latency as a result.
#### Idle-priority background GC workers
Idle-priority GC workers are extra workers that run if the application isn't
utilizing all `GOMAXPROCS` worth of parallelism.
The scheduler schedules "low priority" background workers on any additional CPU
resources, and this ultimately skews utilization measurements in the GC.
In today's pacer, they're somewhat accounted for.
In general, the idle workers skew toward undershoot: because the pacer is not
explicitly aware of the idle workers, GC cycles will complete sooner than it
might expect.
However if a GC cycle completes early, then in theory the current pacer will
simply adjust.
From its perspective, scanning and marking objects was just faster than
expected.
The new proposed design must also account for them somehow.
I propose including idle priority worker CPU utilization in both the measured
utilization, ![`u_n`](44167/inl23.png), and the target utilization,
![`u_t`](44167/inl9.png), in each GC cycle.
In this way, the only difference between the two terms remains GC assist
utilization, and while idle worker CPU utilization remains stable, the pacer may
effectively account for it.
Unfortunately this does mean idle-priority GC worker utilization becomes part of
the definition of a GC steady-state, making it slightly more fragile.
The good news is that that was already true.
Due to other issues with idle-priority GC workers, it may be worth revisiting
them as a concept, but they do appear to legitimately help certain applications,
particularly those that spend a significant chunk of time blocked on I/O, yet
spend some significant chunk of their time awake allocating memory.
### Smoothing out GC assists
This discussion of GC assists brings us to the existing issues around pacing
decisions made *while* the GC is active (which I will refer to as the "GC assist
pacer" below).
For the most part, this system works very well, and is able to smooth over small
hiccups in performance, due to noise from the underlying platform or elsewhere.
Unfortunately, there's one place where it doesn't do so well: the hard heap
goal.
Currently, the GC assist pacer prevents memory blow-up in pathological cases by
ramping up assists once either the GC has found more work than it expected (i.e.
the live scannable heap has grown) or the GC is behind and the application's
heap size has exceeded the heap goal.
In both of these cases, it sets a somewhat arbitrarily defined hard limit at
1.1x the heap goal.
The problem with this policy is that high `GOGC` values create the opportunity
for very large changes in live heap size, because the GC has quite a lot of
runway (consider an application with `GOGC=51100` that has a steady-state live
heap of 10 MiB, where suddenly all the memory it allocates is live).
In this case, the GC assist pacer is going to find all this new live memory and
panic: the rate of assists will begin to skyrocket.
This particular problem impedes the adoption of any sort of target heap size, or
configurable minimum heap size.
One can imagine a small live heap with a large target heap size as having a
large *effective* `GOGC` value, so it reduces to exactly the same case.
To deal with this, I propose modifying the GC assist policy to set a hard heap
goal of ![`\gamma N_n`](44167/inl24.png).
The intuition behind this goal is that if *all* the memory allocated in this GC
cycle turns out to be live, the *next* GC cycle will end up using that much
memory *anyway*, so we let it slide.
But this hard goal need not be used for actually pacing GC assists other than in
extreme cases.
In fact, it must not, because an assist ratio computed from this hard heap goal
and the worst-case scan work turns out to be extremely loose, leading to the GC
assist pacer consistently missing the heap goal in some steady-states.
So, I propose an alternative calculation for the assist ratio.
I believe that the assist ratio must always pass through the heap goal, since
otherwise there's no guarantee that the GC meets its heap goal in the
steady-state (which is a fundamental property of the pacer in Go's existing GC
design).
However, there's no reason why the ratio itself needs to change dramatically
when there's more GC work than expected.
In fact, the preferable case is that it does not, because that lends itself to a
much more even distribution of GC assist work across the cycle.
So, I propose that the assist ratio be an extrapolation of the current
steady-state assist ratio, with the exception that it now include non-heap GC
work as the rest of this document does.
That is,
![Equation 7](44167/eqn7.png)
This definition is intentionally roundabout.
The assist ratio changes dynamically as the amount of GC work left decreases and
the amount of memory allocated increases.
This responsiveness is what allows the pacing mechanism to be so robust.
Today, the assist ratio is calculated by computing the remaining heap runway and
the remaining expected GC work, and dividing the former by the latter.
But of course, that's not possible if there's more GC work than expected, since
then the assist ratio could go negative, which is meaningless.
So that's the purpose of defining the "max scan work" and "extrapolated runway":
these are worst-case values that are always safe to subtract from, such that we
can maintain roughly the same assist ratio throughout the cycle (assuming no
hiccups).
One minor detail is that the "extrapolated runway" needs to be capped at the
hard heap goal to prevent breaking that promise, though in practice this will
almost never come into play.
The hard heap goal is such a loose bound that it's really only useful in
pathological cases, but it's still necessary to ensure robustness.
A key point in this choice is that the GC assist pacer will *only* respond to
changes in allocation behavior and scan rate, not changes in the *size* of the
live heap.
This point seems minor, but it means the GC assist pacer's function is much
simpler and more predictable.
## Remaining unanswered questions
Not every problem listed in issue
[#42430](https://github.com/golang/go/issues/42430) is resolved by this design,
though many are.
Notable exclusions are:
1. Mark assists are front-loaded in a GC cycle.
1. The hard heap goal isn't actually hard in some circumstances.
1. Existing trigger limits to prevent unbounded memory growth.
(1) is difficult to resolve without special cases and arbitrary heuristics, and
I think in practice it's OK; the system was already fairly robust to this kind
of noise and will now be more so.
That doesn't mean that it shouldn't be revisited, but it's not quite as big as
the other problems, so I leave it outside the scope of this proposal.
(2) is also tricky and somewhat orthogonal.
I believe the path forward there involves better scheduling of fractional GC
workers, which are currently very loosely scheduled.
This design has made me realize how important dedicated GC workers are to
progress, and how GC assists are a back-up mechanism.
I believe that the fundamental problem there lies with the fact that fractional
GC workers don't provide that sort of consistent progress.
For (3), I propose we retain the limits, translated to the current design.
For reference, these limits are ![`0.95 (\gamma - 1)`](44167/inl25.png) as the
upper-bound on the trigger ratio, and ![`0.6 (\gamma - 1)`](44167/inl26.png) as
the lower-bound.
The upper bound exists to prevent ever starting the GC too late in low-activity
scenarios.
It may cause consistent undershoot, but prevents issues in GC pacer calculations
by preventing the calculated runway from ever being too low.
The upper-bound may need to be revisited when considering a configurable target
heap size.
The lower bound exists to prevent the application from causing excessive memory
growth due to floating garbage as the application's allocation rate increases.
Before that limit was installed, it wasn't very easy for an application to
allocate hard enough for that to happen.
The lower bound probably should be revisited, but I leave that outside of the
scope of this document.
To translate them to the current design, I propose we simply modify the trigger
condition to include these limits.
It's not important to put these limits in the rest of the pacer because it no
longer tries to compute the trigger point ahead of time.
### Initial conditions
Like today, the pacer has to start somewhere for the first GC.
I propose we carry forward what we already do today: set the trigger point at
7/8ths of the first heap goal, which will always be the minimum heap size.
If GC 1 is the first GC, then in terms of the math above, we choose to avoid
defining ![`M_0`](44167/inl27.png), and instead directly define
![Equation 8](44167/eqn8.png)
The definition of ![`P_0`](44167/inl28.png) is necessary for the GC assist
pacer.
Furthermore, the PI controller's state will be initialized to zero.
These choices are somewhat arbitrary, but the fact is that the pacer has no
knowledge of the program's past behavior for the first GC.
Naturally the behavior of the GC will always be a little odd, but it should, in
general, stabilize quite quickly (note that this is the case in each scenario
for the [simulations](#simulations)).
## A note about CPU utilization
This document uses the term "GC CPU utilization" quite frequently, but so far
has refrained from defining exactly how it's measured.
Before doing that, let's define CPU utilization over a GC mark phase, as it's
been used so far.
First, let's define the mark phase: it is the period of wall-clock time between
the end of sweep termination and the start of mark termination.
In the mark phase, the process will have access to some total number of
CPU-seconds of execution time.
This CPU time can then be divided into "time spent doing GC work" and "time
spent doing anything else."
GC CPU utilization is then defined as a proportion of that total CPU time that
is spent doing GC work.
This definition seems straightforward enough but in reality it's more
complicated.
Measuring CPU time on most platforms is tricky, so what Go does today is an
approximation: take the wall-clock time of the GC mark phase, multiply it by
`GOMAXPROCS`.
Call this ![`T`](44167/inl29.png).
Take 25% of that (representing the dedicated GC workers) and add the total
amount of time all goroutines spend in GC assists.
The latter is computed directly, but is just the difference between the start
and end time in the critical section; it does not try to account for context
switches forced by the underlying system, or anything like that.
Now take this value we just computed and divide it by ![`T`](44167/inl29.png).
That's our GC CPU utilization.
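The approximation just described can be sketched directly. This assumes only what the text states: the 25% dedicated-worker fraction plus directly measured assist time, over a total of wall-clock duration times `GOMAXPROCS`. Durations are plain seconds for simplicity.

```go
package main

import "fmt"

// gcUtilization sketches the approximation used for GC CPU utilization:
// total CPU time over the mark phase is taken to be the phase's
// wall-clock duration times GOMAXPROCS (T), and GC time is 25% of that
// (the dedicated workers) plus the measured assist time.
func gcUtilization(markWallSeconds float64, gomaxprocs int, assistSeconds float64) float64 {
	total := markWallSeconds * float64(gomaxprocs) // T
	gc := 0.25*total + assistSeconds
	return gc / total
}

func main() {
	// A 500ms mark phase on 8 Ps gives T = 4 CPU-seconds; with 1
	// CPU-second of assists, utilization is 0.25 + 1/4 = 0.5.
	fmt.Println(gcUtilization(0.5, 8, 1.0))
}
```

The sketch also makes the skew obvious: if the OS starves the process of CPU, `T` overstates the time actually available, and the computed utilization drifts from reality.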
This approximation is mostly accurate in the common case, but is prone to skew
in various scenarios, such as when the system is CPU-starved.
This fact can be problematic, but I believe it is largely orthogonal to the
content of this document; we can work on improving this approximation without
having to change any of this design.
It already assumes that we have a good measure of CPU utilization.
## Alternatives considered
The alternatives considered for this design basically boil down to its
individual components.
For instance, I considered grouping stacks and globals into the current
formulation of the pacer, but that adds another condition to the definition of
the steady-state: stacks and globals do not change.
That makes the steady-state more fragile.
I also considered a design that was similar, but computed everything in terms of
an "effective" `GOGC`, and "converted" that back to `GOGC` for pacing purposes
(that is, what would the heap trigger have been had the expected amount of live
memory been correct?).
This formulation is similar to how Austin formulated the experimental
`SetMaxHeap` API.
Austin suggested I avoid this formulation because math involving `GOGC` tends to
have to work around infinities.
A good example of this is if `runtime.GC` is called while `GOGC` is off: the
runtime has to "fake" a very large `GOGC` value in the pacer.
By using a ratio of rates that's more grounded in actual application behavior
the trickiness of the math is avoided.
I also considered not using a PI controller and just using the measured
![`r`](44167/inl14.png) value directly, assuming it doesn't change across GC
cycles, but that method is prone to noise.
## Justification
Pros:
- The steady-state is now independent of the amount of GC work to be done.
- Steady-state mark assists drop to zero if the application isn't allocating
  too heavily (a likely latency improvement in many scenarios; see the "high
  `GOGC`" scenario in [simulations](#simulations)).
- GC amortization includes non-heap GC work, and responds well in those cases.
- Eliminates offset error present in the existing design.
Cons:
- Definition of `GOGC` has changed slightly, so a `GOGC` of 100 will use
slightly more memory in nearly all cases.
- ![`r`](44167/inl14.png) is a little bit unintuitive.
## Implementation
This pacer redesign will be implemented by Michael Knyszek.
1. The existing pacer will be refactored into a form fit for simulation.
1. A comprehensive simulation-based test suite will be written for the pacer.
1. The existing pacer will be swapped out with the new implementation.
The purpose of the simulation infrastructure is to make the pacer, in general,
more testable.
This lets us write regression test cases based on real-life problems we
encounter and confirm that they don't break going forward.
Furthermore, with fairly large changes to the Go compiler and runtime in the
pipeline, it's especially important to reduce the risk of this change as much as
possible.
## Go 1 backwards compatibility
This change will not modify any user-visible APIs, but may have surprising
effects on application behavior.
The two "most" incompatible changes in this proposal are the redefinition of the
heap goal to include non-heap sources of GC work, since that directly influences
the meaning of `GOGC`, and the change in target GC CPU utilization.
These two factors together mean that, by default and on average, Go applications
will use slightly more memory than before.
To obtain previous levels of memory usage, users may be required to tune down
`GOGC` lower manually, but the overall result should be more consistent, more
predictable, and more efficient.
## Simulations
In order to show the effectiveness of the new pacer and compare it to the
current one, I modeled both the existing pacer and the new pacer and simulated
both in a variety of scenarios.
The code used to run these simulations and generate the plots below may be found
at [github.com/mknyszek/pacer-model](https://github.com/mknyszek/pacer-model).
### Assumptions and caveats
The model of each pacer is fairly detailed, and takes into account most details
like allocations made during a GC being marked.
The one big assumption it makes, however, is that the behavior of the
application while a GC cycle is running is perfectly smooth, such that the GC
assist pacer is perfectly paced according to the initial assist ratio.
In practice, this is close to true, but it's worth accounting for the more
extreme cases.
(TODO: Show simulations that inject some noise into the GC assist pacer.)
Another caveat with the simulation is the graph of "R value" (that is,
![`r_n`](44167/inl19.png)), and "Alloc/scan ratio."
The latter is well-defined for all simulations (it's a part of the input) but
the former is not a concept used in the current pacer.
So for simulations of the current pacer, the "R value" is backed out from the
trigger ratio: we know the runway, we know the *expected* scan work for the
target utilization, so we can compute the ![`r_n`](44167/inl19.png) that the
trigger point encodes.
### Results
**Perfectly steady heap size.**
The simplest possible scenario.
Current pacer:
![](44167/pacer-plots/old-steady.png)
New pacer:
![](44167/pacer-plots/new-steady.png)
Notes:
- The current pacer doesn't seem to find the right utilization.
- Both pacers do reasonably well at meeting the heap goal.
**Jittery heap size and alloc/scan ratio.**
A mostly steady-state heap with a slight jitter added to both live heap size and
the alloc/scan ratio.
Current pacer:
![](44167/pacer-plots/old-jitter-alloc.png)
New pacer:
![](44167/pacer-plots/new-jitter-alloc.png)
Notes:
- Both pacers are resilient to a small amount of noise.
**Small step in alloc/scan ratio.**
This scenario demonstrates the transitions between two steady-states, that are
not far from one another.
Current pacer:
![](44167/pacer-plots/old-step-alloc.png)
New pacer:
![](44167/pacer-plots/new-step-alloc.png)
Notes:
- Both pacers react to the change in alloc/scan rate.
- Clear oscillations in utilization visible for the new pacer.
**Large step in alloc/scan ratio.**
This scenario demonstrates the transitions between two steady-states, that are
further from one another.
Current pacer:
![](44167/pacer-plots/old-heavy-step-alloc.png)
New pacer:
![](44167/pacer-plots/new-heavy-step-alloc.png)
Notes:
- The old pacer consistently overshoots the heap size post-step.
- The new pacer minimizes overshoot.
**Large step in heap size with a high `GOGC` value.**
This scenario demonstrates the "high `GOGC` problem" described in the [GC pacer
meta-issue](https://github.com/golang/go/issues/42430).
Current pacer:
![](44167/pacer-plots/old-high-GOGC.png)
New pacer:
![](44167/pacer-plots/new-high-GOGC.png)
Notes:
- The new pacer's heap size stabilizes faster than the old pacer's.
- The new pacer has a spike in overshoot; this is *by design*.
- The new pacer's utilization is independent of this heap size spike.
- The old pacer has a clear spike in utilization.
**Oscillating alloc/scan ratio.**
This scenario demonstrates an oscillating alloc/scan ratio.
This scenario is interesting because it shows a somewhat extreme case where a
steady-state is never actually reached for any amount of time.
However, this is not a realistic scenario.
Current pacer:
![](44167/pacer-plots/old-osc-alloc.png)
New pacer:
![](44167/pacer-plots/new-osc-alloc.png)
Notes:
- The new pacer tracks the oscillations worse than the old pacer.
This is likely due to the error never settling, so the PI controller is always
overshooting.
**Large amount of goroutine stacks.**
This scenario demonstrates the "heap amortization problem" described in the [GC
pacer meta-issue](https://github.com/golang/go/issues/42430) for goroutine
stacks.
Current pacer:
![](44167/pacer-plots/old-big-stacks.png)
New pacer:
![](44167/pacer-plots/new-big-stacks.png)
Notes:
- The old pacer consistently overshoots because it's underestimating the amount
of work it has to do.
- The new pacer uses more memory, since the heap goal is now proportional to
stack space, but it stabilizes and is otherwise sane.
**Large amount of global variables.**
This scenario demonstrates the "heap amortization problem" described in the [GC
pacer meta-issue](https://github.com/golang/go/issues/42430) for global
variables.
Current pacer:
![](44167/pacer-plots/old-big-globals.png)
New pacer:
![](44167/pacer-plots/new-big-globals.png)
Notes:
- This is essentially identical to the stack space case.
**High alloc/scan ratio.**
This scenario shows the behavior of each pacer in the face of a very high
alloc/scan ratio, with jitter applied to both the live heap size and the
alloc/scan ratio.
Current pacer:
![](44167/pacer-plots/old-heavy-jitter-alloc.png)
New pacer:
![](44167/pacer-plots/new-heavy-jitter-alloc.png)
Notes:
- In the face of a very high allocation rate, the old pacer consistently
overshoots, though both maintain a similar GC CPU utilization.
# Proposal: Go Benchmark Data Format
Authors: Russ Cox, Austin Clements
Last updated: February 2016
Discussion at [golang.org/issue/14313](https://golang.org/issue/14313).
## Abstract
We propose to make the current output of `go test -bench` the defined format for recording all Go benchmark data.
Having a defined format allows benchmark measurement programs
and benchmark analysis programs to interoperate while
evolving independently.
## Background
### Benchmark data formats
We are unaware of any standard formats for recording raw benchmark data,
and we've been unable to find any using web searches.
One might expect that a standard benchmark suite such as SPEC CPU2006 would have
defined a format for raw results, but that appears not to be the case.
The [collection of published results](https://www.spec.org/cpu2006/results/)
includes only analyzed data ([example](https://www.spec.org/cpu2006/results/res2011q3/cpu2006-20110620-17230.txt)), not raw data.
Go has a de facto standard format for benchmark data:
the lines generated by the testing package when using `go test -bench`.
For example, running compress/flate's benchmarks produces this output:
BenchmarkDecodeDigitsSpeed1e4-8 100 154125 ns/op 64.88 MB/s 40418 B/op 7 allocs/op
BenchmarkDecodeDigitsSpeed1e5-8 10 1367632 ns/op 73.12 MB/s 41356 B/op 14 allocs/op
BenchmarkDecodeDigitsSpeed1e6-8 1 13879794 ns/op 72.05 MB/s 52056 B/op 94 allocs/op
BenchmarkDecodeDigitsDefault1e4-8 100 147551 ns/op 67.77 MB/s 40418 B/op 8 allocs/op
BenchmarkDecodeDigitsDefault1e5-8 10 1197672 ns/op 83.50 MB/s 41508 B/op 13 allocs/op
BenchmarkDecodeDigitsDefault1e6-8 1 11808775 ns/op 84.68 MB/s 53800 B/op 80 allocs/op
BenchmarkDecodeDigitsCompress1e4-8 100 143348 ns/op 69.76 MB/s 40417 B/op 8 allocs/op
BenchmarkDecodeDigitsCompress1e5-8 10 1185527 ns/op 84.35 MB/s 41508 B/op 13 allocs/op
BenchmarkDecodeDigitsCompress1e6-8 1 11740304 ns/op 85.18 MB/s 53800 B/op 80 allocs/op
BenchmarkDecodeTwainSpeed1e4-8 100 143665 ns/op 69.61 MB/s 40849 B/op 15 allocs/op
BenchmarkDecodeTwainSpeed1e5-8 10 1390359 ns/op 71.92 MB/s 45700 B/op 31 allocs/op
BenchmarkDecodeTwainSpeed1e6-8 1 12128469 ns/op 82.45 MB/s 89336 B/op 221 allocs/op
BenchmarkDecodeTwainDefault1e4-8 100 141916 ns/op 70.46 MB/s 40849 B/op 15 allocs/op
BenchmarkDecodeTwainDefault1e5-8 10 1076669 ns/op 92.88 MB/s 43820 B/op 28 allocs/op
BenchmarkDecodeTwainDefault1e6-8 1 10106485 ns/op 98.95 MB/s 71096 B/op 172 allocs/op
BenchmarkDecodeTwainCompress1e4-8 100 138516 ns/op 72.19 MB/s 40849 B/op 15 allocs/op
BenchmarkDecodeTwainCompress1e5-8 10 1227964 ns/op 81.44 MB/s 43316 B/op 25 allocs/op
BenchmarkDecodeTwainCompress1e6-8 1 10040347 ns/op 99.60 MB/s 72120 B/op 173 allocs/op
BenchmarkEncodeDigitsSpeed1e4-8 30 482808 ns/op 20.71 MB/s
BenchmarkEncodeDigitsSpeed1e5-8 5 2685455 ns/op 37.24 MB/s
BenchmarkEncodeDigitsSpeed1e6-8 1 24966055 ns/op 40.05 MB/s
BenchmarkEncodeDigitsDefault1e4-8 20 655592 ns/op 15.25 MB/s
BenchmarkEncodeDigitsDefault1e5-8 1 13000839 ns/op 7.69 MB/s
BenchmarkEncodeDigitsDefault1e6-8 1 136341747 ns/op 7.33 MB/s
BenchmarkEncodeDigitsCompress1e4-8 20 668083 ns/op 14.97 MB/s
BenchmarkEncodeDigitsCompress1e5-8 1 12301511 ns/op 8.13 MB/s
BenchmarkEncodeDigitsCompress1e6-8 1 137962041 ns/op 7.25 MB/s
The testing package always reports ns/op, and each benchmark can request the addition of MB/s (throughput) and also B/op and allocs/op (allocation rates).
### Benchmark processors
Multiple tools have been written that process this format,
most notably [benchcmp](https://godoc.org/golang.org/x/tools/cmd/benchcmp)
and its more statistically valid successor [benchstat](https://godoc.org/rsc.io/benchstat).
There is also [benchmany](https://godoc.org/github.com/aclements/go-misc/benchmany)'s plot subcommand
and likely more unpublished programs.
### Benchmark runners
Multiple tools have also been written that generate this format.
In addition to the standard Go testing package,
[compilebench](https://godoc.org/rsc.io/compilebench)
generates this data format based on runs of the Go compiler,
and Austin's unpublished shellbench generates this data format
after running an arbitrary shell command.
The [golang.org/x/benchmarks/bench](https://golang.org/x/benchmarks/bench) benchmarks
are notable for _not_ generating this format,
which has made all analysis of those results
more complex than we believe it should be.
We intend to update those benchmarks to generate the standard format,
once a standard format is defined.
Part of the motivation for the proposal is to avoid
the need to process custom output formats in future benchmarks.
## Proposal
A Go benchmark data file is a UTF-8 textual file consisting of a sequence of lines.
Configuration lines and benchmark result lines, described below,
have semantic meaning in the reporting of benchmark results.
All other lines in the data file, including but not limited to
blank lines and lines beginning with a # character, are ignored.
For example, the testing package prints test results above benchmark data,
usually the text `PASS`. That line is neither a configuration line nor a benchmark
result line, so it is ignored.
### Configuration Lines
A configuration line is a key-value pair of the form
key: value
where key begins with a lower case character (as defined by `unicode.IsLower`),
contains no space characters (as defined by `unicode.IsSpace`)
nor upper case characters (as defined by `unicode.IsUpper`),
and one or more ASCII space or tab characters separate “key:” from “value.”
Conventionally, multiword keys are written with the words
separated by hyphens, as in cpu-speed.
There are no restrictions on value, except that it cannot contain a newline character.
Value can be omitted entirely, in which case the colon must still be
present, but need not be followed by a space.
The interpretation of a key/value pair is up to tooling, but the key/value pair
is considered to describe all benchmark results that follow,
until overwritten by a configuration line with the same key.
### Benchmark Results
A benchmark result line has the general form
<name> <iterations> <value> <unit> [<value> <unit>...]
The fields are separated by runs of space characters (as defined by `unicode.IsSpace`),
so the line can be parsed with `strings.Fields`.
The line must have an even number of fields, and at least four.
The first field is the benchmark name, which must begin with `Benchmark`
followed by an upper case character (as defined by `unicode.IsUpper`)
or the end of the field,
as in `BenchmarkReverseString` or just `Benchmark`.
Tools displaying benchmark data conventionally omit the `Benchmark` prefix.
The same benchmark name can appear on multiple result lines,
indicating that the benchmark was run multiple times.
The second field gives the number of iterations run.
For most processing this number can be ignored, although
it may give some indication of the expected accuracy
of the measurements that follow.
The remaining fields report value/unit pairs in which the value
is a float64 that can be parsed by `strconv.ParseFloat`
and the unit explains the value, as in “64.88 MB/s”.
The units reported are typically normalized so that they can be
interpreted without reference to the number of iterations.
In the example, the CPU cost is reported per-operation and the
throughput is reported per-second; neither is a total that
depends on the number of iterations.
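These rules make a result line parseable in a few lines of Go. The sketch below is illustrative only; among other simplifications, it does not check that `Benchmark` is followed by an upper case character or the end of the field:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseResultLine splits a benchmark result line into its name,
// iteration count, and value/unit pairs.
func parseResultLine(line string) (name string, iters int, vals map[string]float64, err error) {
	f := strings.Fields(line)
	// An even number of fields, at least four, starting with the name.
	if len(f) < 4 || len(f)%2 != 0 || !strings.HasPrefix(f[0], "Benchmark") {
		return "", 0, nil, fmt.Errorf("not a benchmark result line: %q", line)
	}
	iters, err = strconv.Atoi(f[1])
	if err != nil {
		return "", 0, nil, err
	}
	vals = make(map[string]float64)
	for i := 2; i < len(f); i += 2 {
		v, err := strconv.ParseFloat(f[i], 64)
		if err != nil {
			return "", 0, nil, err
		}
		vals[f[i+1]] = v
	}
	return f[0], iters, vals, nil
}

func main() {
	name, n, vals, err := parseResultLine(
		"BenchmarkDecodeDigitsSpeed1e4-8   100   154125 ns/op   64.88 MB/s")
	if err != nil {
		panic(err)
	}
	fmt.Println(name, n, vals["ns/op"], vals["MB/s"])
}
```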
### Value Units
A value's unit string is expected to specify not only the measurement unit
but also, as needed, a description of what is being measured.
For example, a benchmark might report its overall execution time
as well as cache miss times with three units “ns/op,” “L1-miss-ns/op,” and “L2-miss-ns/op.”
Tooling can expect that the unit strings are identical for all runs to be compared;
for example, a result reporting “ns/op” need not be considered comparable
to one reporting “µs/op.”
However, tooling may assume that the measurement unit is the final
hyphen-separated word of the unit string and may recognize
and rescale known measurement units.
For example, consistently large “ns/op” or “L1-miss-ns/op”
might be rescaled to “ms/op” or “L1-miss-ms/op” for display.
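A tool that rescales units for display might first isolate the measurement unit as the final hyphen-separated word, as described above. A minimal sketch (not taken from any published tool):

```go
package main

import (
	"fmt"
	"strings"
)

// measurementUnit returns the final hyphen-separated word of a unit
// string; tooling may treat that word as the measurement unit and
// rescale it for display, preserving any descriptive prefix.
func measurementUnit(unit string) string {
	if i := strings.LastIndex(unit, "-"); i >= 0 {
		return unit[i+1:]
	}
	return unit
}

func main() {
	fmt.Println(measurementUnit("L1-miss-ns/op")) // ns/op
	fmt.Println(measurementUnit("MB/s"))          // MB/s
}
```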
### Benchmark Name Configuration
In the current testing package, benchmark names correspond to Go identifiers:
each benchmark must be written as a different Go function.
[Work targeted for Go 1.7](https://github.com/golang/proposal/blob/master/design/12166-subtests.md) will allow tests and benchmarks
to define sub-tests and sub-benchmarks programmatically,
in particular to vary interesting parameters both when
testing and when benchmarking.
That work uses a slash to separate the name of a benchmark
collection from the description of a sub-benchmark.
We propose that sub-benchmarks adopt the convention of
choosing names that are key=value pairs, and that benchmark data
processors treat slash-prefixed key=value pairs in the benchmark
name as per-benchmark configuration values.
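Under this convention, a benchmark data processor could recover per-benchmark configuration from the name itself. A sketch follows; note that the `-8` GOMAXPROCS suffix appended by the testing package is left attached to the final pair here, and a real tool would want to strip it first:

```go
package main

import (
	"fmt"
	"strings"
)

// nameConfig splits a sub-benchmark name into its base name and
// the slash-separated key=value pairs it carries.
func nameConfig(name string) (base string, cfg map[string]string) {
	parts := strings.Split(name, "/")
	base = parts[0]
	cfg = make(map[string]string)
	for _, p := range parts[1:] {
		if k, v, ok := strings.Cut(p, "="); ok {
			cfg[k] = v
		}
	}
	return base, cfg
}

func main() {
	base, cfg := nameConfig("BenchmarkDecode/text=digits/level=speed/size=1e4-8")
	fmt.Println(base, cfg["text"], cfg["level"], cfg["size"])
	// BenchmarkDecode digits speed 1e4-8
}
```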
### Example
The benchmark output given in the background section above
is already in the format proposed here.
That is a key feature of the proposal.
However, a future run of the benchmark might add configuration lines,
and the benchmark might be rewritten to use sub-benchmarks,
producing this output:
commit: 7cd9055
commit-time: 2016-02-11T13:25:45-0500
goos: darwin
goarch: amd64
cpu: Intel(R) Core(TM) i7-4980HQ CPU @ 2.80GHz
cpu-count: 8
cpu-physical-count: 4
os: Mac OS X 10.11.3
mem: 16 GB
BenchmarkDecode/text=digits/level=speed/size=1e4-8 100 154125 ns/op 64.88 MB/s 40418 B/op 7 allocs/op
BenchmarkDecode/text=digits/level=speed/size=1e5-8 10 1367632 ns/op 73.12 MB/s 41356 B/op 14 allocs/op
BenchmarkDecode/text=digits/level=speed/size=1e6-8 1 13879794 ns/op 72.05 MB/s 52056 B/op 94 allocs/op
BenchmarkDecode/text=digits/level=default/size=1e4-8 100 147551 ns/op 67.77 MB/s 40418 B/op 8 allocs/op
BenchmarkDecode/text=digits/level=default/size=1e5-8 10 1197672 ns/op 83.50 MB/s 41508 B/op 13 allocs/op
BenchmarkDecode/text=digits/level=default/size=1e6-8 1 11808775 ns/op 84.68 MB/s 53800 B/op 80 allocs/op
BenchmarkDecode/text=digits/level=best/size=1e4-8 100 143348 ns/op 69.76 MB/s 40417 B/op 8 allocs/op
BenchmarkDecode/text=digits/level=best/size=1e5-8 10 1185527 ns/op 84.35 MB/s 41508 B/op 13 allocs/op
BenchmarkDecode/text=digits/level=best/size=1e6-8 1 11740304 ns/op 85.18 MB/s 53800 B/op 80 allocs/op
BenchmarkDecode/text=twain/level=speed/size=1e4-8 100 143665 ns/op 69.61 MB/s 40849 B/op 15 allocs/op
BenchmarkDecode/text=twain/level=speed/size=1e5-8 10 1390359 ns/op 71.92 MB/s 45700 B/op 31 allocs/op
BenchmarkDecode/text=twain/level=speed/size=1e6-8 1 12128469 ns/op 82.45 MB/s 89336 B/op 221 allocs/op
BenchmarkDecode/text=twain/level=default/size=1e4-8 100 141916 ns/op 70.46 MB/s 40849 B/op 15 allocs/op
BenchmarkDecode/text=twain/level=default/size=1e5-8 10 1076669 ns/op 92.88 MB/s 43820 B/op 28 allocs/op
BenchmarkDecode/text=twain/level=default/size=1e6-8 1 10106485 ns/op 98.95 MB/s 71096 B/op 172 allocs/op
BenchmarkDecode/text=twain/level=best/size=1e4-8 100 138516 ns/op 72.19 MB/s 40849 B/op 15 allocs/op
BenchmarkDecode/text=twain/level=best/size=1e5-8 10 1227964 ns/op 81.44 MB/s 43316 B/op 25 allocs/op
BenchmarkDecode/text=twain/level=best/size=1e6-8 1 10040347 ns/op 99.60 MB/s 72120 B/op 173 allocs/op
BenchmarkEncode/text=digits/level=speed/size=1e4-8 30 482808 ns/op 20.71 MB/s
BenchmarkEncode/text=digits/level=speed/size=1e5-8 5 2685455 ns/op 37.24 MB/s
BenchmarkEncode/text=digits/level=speed/size=1e6-8 1 24966055 ns/op 40.05 MB/s
BenchmarkEncode/text=digits/level=default/size=1e4-8 20 655592 ns/op 15.25 MB/s
BenchmarkEncode/text=digits/level=default/size=1e5-8 1 13000839 ns/op 7.69 MB/s
BenchmarkEncode/text=digits/level=default/size=1e6-8 1 136341747 ns/op 7.33 MB/s
BenchmarkEncode/text=digits/level=best/size=1e4-8 20 668083 ns/op 14.97 MB/s
BenchmarkEncode/text=digits/level=best/size=1e5-8 1 12301511 ns/op 8.13 MB/s
BenchmarkEncode/text=digits/level=best/size=1e6-8 1 137962041 ns/op 7.25 MB/s
Using sub-benchmarks has benefits beyond this proposal, namely that it would
avoid the current repetitive code:
func BenchmarkDecodeDigitsSpeed1e4(b *testing.B) { benchmarkDecode(b, digits, speed, 1e4) }
func BenchmarkDecodeDigitsSpeed1e5(b *testing.B) { benchmarkDecode(b, digits, speed, 1e5) }
func BenchmarkDecodeDigitsSpeed1e6(b *testing.B) { benchmarkDecode(b, digits, speed, 1e6) }
func BenchmarkDecodeDigitsDefault1e4(b *testing.B) { benchmarkDecode(b, digits, default_, 1e4) }
func BenchmarkDecodeDigitsDefault1e5(b *testing.B) { benchmarkDecode(b, digits, default_, 1e5) }
func BenchmarkDecodeDigitsDefault1e6(b *testing.B) { benchmarkDecode(b, digits, default_, 1e6) }
func BenchmarkDecodeDigitsCompress1e4(b *testing.B) { benchmarkDecode(b, digits, compress, 1e4) }
func BenchmarkDecodeDigitsCompress1e5(b *testing.B) { benchmarkDecode(b, digits, compress, 1e5) }
func BenchmarkDecodeDigitsCompress1e6(b *testing.B) { benchmarkDecode(b, digits, compress, 1e6) }
func BenchmarkDecodeTwainSpeed1e4(b *testing.B) { benchmarkDecode(b, twain, speed, 1e4) }
func BenchmarkDecodeTwainSpeed1e5(b *testing.B) { benchmarkDecode(b, twain, speed, 1e5) }
func BenchmarkDecodeTwainSpeed1e6(b *testing.B) { benchmarkDecode(b, twain, speed, 1e6) }
func BenchmarkDecodeTwainDefault1e4(b *testing.B) { benchmarkDecode(b, twain, default_, 1e4) }
func BenchmarkDecodeTwainDefault1e5(b *testing.B) { benchmarkDecode(b, twain, default_, 1e5) }
func BenchmarkDecodeTwainDefault1e6(b *testing.B) { benchmarkDecode(b, twain, default_, 1e6) }
func BenchmarkDecodeTwainCompress1e4(b *testing.B) { benchmarkDecode(b, twain, compress, 1e4) }
func BenchmarkDecodeTwainCompress1e5(b *testing.B) { benchmarkDecode(b, twain, compress, 1e5) }
func BenchmarkDecodeTwainCompress1e6(b *testing.B) { benchmarkDecode(b, twain, compress, 1e6) }
More importantly for this proposal, using sub-benchmarks also makes the possible
comparison axes clear: digits vs twain, speed vs default vs best, size 1e4 vs 1e5 vs 1e6.
## Rationale
As discussed in the background section,
we have already developed a number of analysis programs
that assume this proposal's format,
as well as a number of programs that generate this format.
Standardizing the format should encourage additional work
on both kinds of programs.
[Issue 12826](https://golang.org/issue/12826) suggests a different approach,
namely the addition of a new `go test` option `-benchformat`, to control
the format of benchmark output. In fact it gives the lack of standardization
as the main justification for a new option:
> Currently `go test -bench .` prints out benchmark results in a
> certain format, but there is no guarantee that this format will not
> change. Thus a tool that parses go test output may break if an
> incompatible change to the output format is made.
Our approach is instead to guarantee that the format will not change,
or rather that it will only change in ways allowed by this design.
An analysis tool that parses the output specified here will not break
in future versions of Go,
and a tool that generates the output specified here will work
with all such analysis tools.
Having one agreed-upon format enables broad interoperation;
the ability for one tool to generate arbitrarily many different formats
does not achieve the same result.
The proposed format also seems to be extensible enough to accommodate
anticipated future work on benchmark reporting.
The main known issue with the current `go test -bench` is that
we'd like to emit finer-grained detail about runs, for linearity testing
and more robust statistics (see [issue 10669](https://golang.org/issue/10669)).
This proposal allows that by simply printing more result lines.
Another known issue is that we may want to add custom outputs
such as garbage collector statistics to certain benchmark runs.
This proposal allows that by adding more value-unit pairs.
## Compatibility
Tools consuming the existing benchmark format may need trivial changes
to ignore non-benchmark result lines or to cope with additional value-unit pairs
in benchmark results.
## Implementation
The benchmark format described here is already generated by `go test -bench`
and expected by tools like `benchcmp` and `benchstat`.
The format is trivial to generate, and it is
straightforward but not quite trivial to parse.
We anticipate that the [new x/perf subrepo](https://github.com/golang/go/issues/14304) will include a library for loading
benchmark data from files, although the format is also simple enough that
tools that want a different in-memory representation might reasonably
write separate parsers.
# Proposal: check references to standard library packages inconsistent with go.mod go version
Author(s): Jay Conrod based on discussion with Daniel Martí, Paul Jolly, Roger
Peppe, Bryan Mills, and others.
Last updated: 2021-05-12
Discussion at https://golang.org/issue/46136.
## Abstract
With this proposal, `go vet` (and `go test`) would report an error if a package
imports a standard library package or references an exported standard library
definition that was introduced in a higher version of Go than the version
declared in the containing module's `go.mod` file.
This makes the meaning of the `go` directive clearer and more consistent. As
part of this proposal, we'll clarify the reference documentation and make a
recommendation to module authors about how the `go` directive should be set.
Specifically, the `go` directive should indicate the minimum version of Go that
a module supports. Authors should set their `go` version to the minimum version
of Go they're willing to support. Clients may or may not see errors when using a
lower version of Go, for example, when importing a module package that imports a
new standard library package or uses a new language feature.
## Background
The `go` directive was introduced in Go 1.12, shortly after modules were
introduced in Go 1.11.
At the time, there were several proposed language changes that seemed like they
might be backward incompatible (collectively, "Go 2"). To avoid an incompatible
split (like Python 2 and 3), we needed a way to declare the language version
used by a set of packages so that Go 1 and Go 2 packages could be mixed together
in the same build, compiled with different syntax and semantics.
We haven't yet made incompatible changes to the language, but we have made some
small compatible changes (binary literals added in 1.13). If a developer using
Go 1.12 or older attempts to build a package with a binary literal (or any other
unknown syntax), and the module containing the package declares Go 1.13 or
higher, the compiler reports an error explaining the problem. The developer also
sees an error in their own package if their `go.mod` file declares `go 1.12` or
lower.
In addition to language changes, the `go` directive has been used to opt in to
new, potentially incompatible module behavior. In Go 1.14, the `go` version was
used to enable automatic vendoring. In 1.17, the `go` version will control lazy
module loading.
One major complication is that access to standard library packages and features
has not been consistently limited. For example, a module author might use
`errors.Is` (added in 1.13) or `io/fs` (added in 1.16) while believing their
module is compatible with a lower version of Go. The author shouldn't be
expected to know this history, but they can't easily determine the lowest
version of Go their module is compatible with.
This complication has made the meaning of the `go` directive very murky.
## Proposal
We propose adding a new `go vet` analysis to report errors in packages that
reference standard library packages and definitions that aren't available
in the version of Go declared in the containing module's `go.mod` file. The
analysis will cover imports, references to exported top-level definitions
(functions, constants, etc.), and references to other exported symbols (fields,
methods).
The analysis should evaluate build constraints in source files (`// +build`
and `//go:build` comments) as if the `go` version in the containing module's
`go.mod` were the actual version of Go used to compile the package. The
analysis should not consider imports and references in files that would only
be built for higher versions of Go.
This analysis should have no false positives, so it may be enabled by default
in `go test`.
Note that both `go vet` and `go test` report findings for packages named on
the command line, but not their dependencies. `go vet all` may be used to check
packages in the main module and everything needed to build them.
The analysis would not report findings for standard library packages.
The analysis would not be enabled in GOPATH mode.
For the purpose of this proposal, modules lacking a `go` directive (including
projects without a `go.mod` file) are assumed to declare Go 1.16.
## Rationale
When writing this proposal, we also considered restrictions in the `go` command
and in the compiler.
The `go` command parses imports and applies build constraints, so it can report
an error if a package in the standard library should not be imported. However,
this may break currently non-compliant builds in a way that module authors
can't easily fix: the error may be in one of their dependencies. We could
disable errors in packages outside the main module, but we still can't easily
re-evaluate build constraints for a lower release version of Go. The `go`
command doesn't type check packages, so it can't easily detect references
to new definitions in standard library packages.
The compiler does perform type checking, but it does not evaluate build
constraints. The `go` command provides the compiler with a list of files to
compile, so the compiler doesn't even know about files excluded by build
constraints.
For these reasons, a vet analysis seems like a better, consistent way to
find these problems.
## Compatibility
The analysis in this proposal may introduce new errors in `go vet` and `go test`
for packages that reference parts of the standard library that aren't available
in the declared `go` version. Module authors can fix these errors by increasing
the `go` version, changing usage (for example, using a polyfill), or guarding
usage with build constraints.
Errors should only be reported in packages named on the command line. Developers
should not see errors in packages outside their control unless they test with
`go test all` or something similar. For those tests, authors may use `-vet=off`
or a narrower set of analyses.
We may want to add this analysis to `go vet` without immediately enabling it by
default in `go test`. While it should be safe to enable in `go test` (no false
positives), we'll need to verify this is actually the case, and we'll need
to understand how common these errors will be.
## Implementation
This proposal is targeted for Go 1.18. Ideally, it should be implemented
at the same time or before generics, since there will be a lot of language
and library changes around that time.
The Go distribution includes files in the `api/` directory that track when
packages and definitions were added to the standard library. These are used to
guard against unintended changes. They're also used in pkgsite documentation.
These files are the source of truth for this proposal. `cmd/vet` will access
these files from `GOROOT`.
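For reference, the `api/` files record one declaration per line. The entries for the examples used earlier in this document look roughly like the following (paraphrased from memory of `api/go1.13.txt` and `api/go1.16.txt`; consult the actual files for the authoritative text):

```
pkg errors, func Is(error, error) bool
pkg io/fs, var ErrNotExist error
```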
The analysis can determine the `go` version for each package by walking up
the file tree and reading the `go.mod` file for the containing module. If the
package is in the module cache, the analysis will use the `.mod` file for the
module version. This file is generated by the `go` command if no `go.mod`
file was present in the original tree.
Each analysis receives a set of parsed and type checked files from `cmd/vet`.
If the proposed analysis detects that one or more source files (including
ignored files) contain build constraints with release tags (like `go1.18`),
the analysis will parse and type check the package again, applying a corrected
set of release tags. The analysis can then look for inappropriate imports
and references.
## Related issues
* [#30639](https://golang.org/issue/30639)
* https://twitter.com/mvdan_/status/1391772223158034434
* https://twitter.com/marcosnils/status/1372966993784152066
* https://twitter.com/empijei/status/1382269202380251137
# Proposal: Less Error-Prone Loop Variable Scoping
David Chase \
Russ Cox \
May 2023
Discussion at https://go.dev/issue/60078. \
Pre-proposal discussion at https://go.dev/issue/56010.
## Abstract
Last fall, we had a GitHub discussion at #56010 about changing for
loop variables declared with `:=` from one-instance-per-loop to
one-instance-per-iteration. Based on that discussion and further work
on understanding the implications of the change, we propose that we
make the change in an appropriate future Go version, perhaps Go 1.22
if the stars align, and otherwise a later version.
## Background
This proposal is about changing for loop variable scoping semantics,
so that loop variables are per-iteration instead of per-loop. This
would eliminate accidental sharing of variables between different
iterations, which happens far more than intentional sharing does. The
proposal would fix [#20733](https://go.dev/issue/20733).
Briefly, the problem is that loops like this one don’t do what they
look like they do:
var ids []*int
for i := 0; i < 10; i++ {
ids = append(ids, &i)
}
That is, this code has a bug. After this loop executes, `ids` contains
10 identical pointers, each pointing at the value 10, instead of 10
distinct pointers to 0 through 9. This happens because the variable
`i` is per-loop, not per-iteration: `&i` is the same on every
iteration, and `i` is overwritten on each iteration. The usual fix
is to write this instead:
var ids []*int
for i := 0; i < 10; i++ {
i := i
ids = append(ids, &i)
}
This bug also often happens in code with closures that capture the
address of item implicitly, like:
var prints []func()
for _, v := range []int{1, 2, 3} {
prints = append(prints, func() { fmt.Println(v) })
}
for _, print := range prints {
print()
}
This code prints 3, 3, 3, because all the closures print the same v,
and at the end of the loop, v is set to 3. Note that there is no
explicit &v to signal a potential problem. Again the fix is the same:
add v := v.
The same bug exists in this version of the program, with the same fix:
var prints []func()
for i := 1; i <= 3; i++ {
prints = append(prints, func() { fmt.Println(i) })
}
for _, print := range prints {
print()
}
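For completeness, the conventional fix for this variant is the same one-line copy. With it, the program prints 1, 2, 3; under the proposed per-iteration semantics the extra copy becomes unnecessary:

```go
package main

import "fmt"

func main() {
	var prints []func()
	for i := 1; i <= 3; i++ {
		i := i // per-iteration copy; redundant under the proposed semantics
		prints = append(prints, func() { fmt.Println(i) })
	}
	for _, print := range prints {
		print()
	}
}
```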
Another common situation where this bug arises is in subtests using t.Parallel:
func TestAllEvenBuggy(t *testing.T) {
testCases := []int{1, 2, 4, 6}
for _, v := range testCases {
t.Run("sub", func(t *testing.T) {
t.Parallel()
if v&1 != 0 {
t.Fatal("odd v", v)
}
})
}
}
This test passes, because all four subtests check that 6 (the
final test case) is even.
Goroutines are also often involved in this kind of bug, although as
these examples show, they need not be. See also the [Go FAQ
entry](https://go.dev/doc/faq#closures_and_goroutines).
Russ [talked at Gophercon once](https://go.dev/blog/toward-go2#explaining-problems)
about how we need agreement about the existence of a problem before we
move on to solutions. When we examined this issue in the run up to Go 1,
it did not seem like enough of a problem. The general consensus was
that it was annoying but not worth changing. Since then, we suspect
every Go programmer in the world has made this mistake in one program
or another.
We have talked for a long time about redefining these semantics, to
make loop variables _per-iteration_ instead of _per-loop_. That is,
the change would effectively be to add an implicit “x := x” at the
start of every loop body for each iteration variable x, just like
people do manually today. Making this change would remove the bugs
from the programs above.
This proposal does exactly that. Using the `go` version lines in
`go.mod` files, it only applies the new semantics to new programs, so
that existing programs are guaranteed to continue to execute exactly
as before.
Before writing this proposal, we collected feedback in a GitHub
Discussion in October 2022, [#56010](https://go.dev/issue/56010). The
vast majority of the feedback was positive, although a couple people
did say they see no problem with the current semantics and discourage
a change. Here are a few representative quotes from that discussion:
> One thing to notice in this discussion is that even after having this
> problem explained multiple times by different people multiple
> developers were still having trouble understanding exactly what caused
> it and how to avoid it, and even the people that understood the
> problem in one context often failed to notice it would affect other
> types of for loops.
>
> So we could argue that the current semantics make the learning curve for Go steeper.
>
> PS: I have also had problems with this multiple times, once in
> production, thus, I am very in favor of this change even considering
> the breaking aspect of it.
>
> — [@VinGarcia](https://github.com/golang/go/discussions/56010#discussioncomment-3789371)
> This exactly matches my experience. It's relatively easy to understand
> the first example (taking the same address each time), but somewhat
> trickier to understand in the closure/goroutine case. And even when
> you do understand it, one forgets (apparently even Russ forgets!). In
> addition, issues with this often don't show up right away, and then
> when debugging an issue, I find it always takes a while to realize
> that it's "that old loop variable issue again".
>
> — [@benhoyt](https://github.com/golang/go/discussions/56010#discussioncomment-3791004)
> Go's unusual loop semantics are a consistent source of problems and
> bugs in the real world. I've been a professional go developer for
> roughly six years, and I still get bit by this bug, and it's a
> consistent stumbling block for newer Go programmers. I would strongly
> encourage this change.
>
> — [@efronlicht](https://github.com/golang/go/discussions/56010#discussioncomment-3798957)
> I really do not see this as a useful change. These changes always have
> the best intentions, but the reality is that the language works just
> fine now. This well intended change slowly creep in over time, until
> you wind up with the C++ language yet again. If someone can't
> understand a relatively simple design decision like this one, they are
> not going to understand how to properly use channels and other
> language features of Go.
>
> Burying a change to the semantics of the language in go.mod is absolutely bonkers.
>
> — [@hydrogen18](https://github.com/golang/go/discussions/56010#discussioncomment-3851670)

Overall, the discussion included 72 participants and 291 total
comments and replies. As a rough measure of user sentiment, the
discussion post received 671 thumbs up, 115 party, and 198 heart emoji
reactions, and not a single thumbs down reaction.

Russ also presented the idea of making this change at GopherCon 2022,
shortly after the discussion, and then again at Google Open Source
Live's Go Day 2022. Feedback from both talks was entirely positive:
not a single person suggested that we should not make this change.
## Proposal
We propose to change for loop scoping in a future version of Go to be
per-iteration instead of per-loop. For the purposes of this document,
we are calling that version Go 1.30, but the change would land in
whatever version of Go it is ready for. The earliest version of Go
that could include the change would be Go 1.22.

This change includes four major parts:
(1) the language specification,
(2) module-based and file-based language version selection,
(3) tooling to help users in the transition,
(4) updates to other parts of the Go ecosystem.

The implementation of these parts spans the compiler, the `go` command,
the `go` `vet` command, and other tools.
### Language Specification
In <https://go.dev/ref/spec#For_clause>, the text currently reads:
> The init statement may be a short variable declaration, but the post
> statement must not. Variables declared by the init statement are
> re-used in each iteration.

This would be replaced with:
> The init statement may be a short variable declaration (`:=`), but the
> post statement must not. Each iteration has its own separate declared
> variable (or variables). The variable used by the first iteration is
> declared by the init statement. The variable used by each subsequent
> iteration is declared implicitly before executing the post statement
> and initialized to the value of the previous iteration's variable at
> that moment.
>
>     var prints []func()
>     for i := 0; i < 3; i++ {
>     	prints = append(prints, func() { println(i) })
>     }
>     for _, p := range prints {
>     	p()
>     }
>
>     // Output:
>     // 0
>     // 1
>     // 2
>
> Prior to Go 1.30, iterations shared one set of variables instead of
> having their own separate variables.

(Remember that in this document, we are using Go 1.30 as the placeholder
for the release that will ship the new semantics.)

For precision in this proposal, the spec example would compile to a
form semantically equivalent to this Go program:
```
{
	i_outer := 0
	first := true
	for {
		i := i_outer
		if first {
			first = false
		} else {
			i++
		}
		if !(i < 3) {
			break
		}
		prints = append(prints, func() { println(i) })
		i_outer = i
	}
}
```

Of course, a compiler can write the code less awkwardly, since it need
not limit the translation output to valid Go source code. In
particular, a compiler is likely to have the concept of the current
memory location associated with `i` and be able to update it just
before the post statement.

In <https://go.dev/ref/spec#For_range>, the text currently reads:
> The iteration variables may be declared by the "range" clause using a
> form of short variable declaration (`:=`). In this case their types
> are set to the types of the respective iteration values and their
> scope is the block of the "for" statement; they are re-used in each
> iteration. If the iteration variables are declared outside the "for"
> statement, after execution their values will be those of the last
> iteration.

This would be replaced with:
> The iteration variables may be declared by the "range" clause using a
> form of short variable declaration (`:=`). In this case their types
> are set to the types of the respective iteration values and their
> scope is the block of the "for" statement; each iteration has its own
> separate variables. If the iteration variables are declared outside
> the "for" statement, after execution their values will be those of the
> last iteration.
>
>     var prints []func()
>     for _, s := range []string{"a", "b", "c"} {
>     	prints = append(prints, func() { println(s) })
>     }
>     for _, p := range prints {
>     	p()
>     }
>
>     // Output:
>     // a
>     // b
>     // c
>
> Prior to Go 1.30, iterations shared one set of variables instead of
> having their own separate variables.

For precision in this proposal, the spec example would compile to a
form semantically equivalent to this Go program:
```
{
	var s_outer string
	for _, s_outer = range []string{"a", "b", "c"} {
		s := s_outer
		prints = append(prints, func() { println(s) })
	}
}
```

Note that in both 3-clause and range forms, this proposal is a
complete no-op for loops with no `:=` in the loop header and loops
with no variable capture in the loop body. In particular, a loop like
the following example, modifying the loop variable during the loop
body, continues to execute as it always has:
```
for i := 0;; i++ {
	if i >= len(s) || s[i] == '"' {
		return s[:i]
	}
	if s[i] == '\\' { // skip escaped char, potentially a quote
		i++
	}
}
```

### Language Version Selection
The change in language specification will fix far more programs than
it breaks, but it may break a very small number of programs. To make
the potential breakage completely user controlled, the rollout would
decide whether to use the new semantics based on the `go` line in each
package’s `go.mod` file. This is the same line already used for
enabling language features; for example, to use generics in a package,
the `go.mod` must say `go 1.18` or later. As a special case, for this
proposal, we would use the `go` line for changing semantics instead of
for adding or removing a feature.

Modules that say `go 1.30` or later would have for loops using
per-iteration variables, while modules declaring earlier versions have
for loops with per-loop variables:
<img width="734" alt="Code in modules that say go 1.30 gets per-iteration variable semantics; code in modules that say earlier Go versions gets per-loop semantics." src="https://user-images.githubusercontent.com/104030/193599987-19d8f564-cb40-488e-beaa-5093a4823ee0.png">

This mechanism would allow the change to be [deployed
gradually](https://go.dev/talks/2016/refactor.article) in a given code
base. Each module can update to the new semantics independently,
avoiding a bifurcation of the ecosystem.

The [forward compatibility work in #57001](https://go.dev/issue/57001),
which will land in Go 1.21, ensures that Go 1.21 and later will not
attempt to compile code marked `go 1.30`. Even if this change lands in
Go 1.22, the previous (and only other supported) Go release would be
Go 1.21, which would understand not to compile `go 1.22` code. So code
opting in to the new loop semantics would never miscompile in older Go
releases, because it would not compile at all. If the changes were to
be slated for Go 1.22, it might make sense to issue a Go 1.20 point
release making its `go` command understand not to compile `go 1.22`
code. Strictly speaking, that point release is unnecessary, because if
Go 1.22 has been released, Go 1.20 is unsupported and we don't need to
worry about its behavior. But in practice people do use older Go
releases for longer than they are supported, and if they keep up with
point releases we can help them avoid this potential problem.

The forward compatibility work also allows a per-file language version
selection using `//go:build` directives. Specifically, if a file in a
`go 1.29` module says `//go:build go1.30`, it gets the Go 1.30
language semantics, and similarly if a file in a `go 1.30` module says
`//go:build go1.29`, it gets the Go 1.29 language semantics. This
general rule would apply to loop semantics as well, so the files in a
module could be converted one at a time in a per-file gradual code
repair if necessary.

Vendoring of other Go modules already records the Go version listed in
each vendored module's `go.mod` file, to implement the general
language version selection rule. That existing support would also
ensure that old vendored modules keep their old loop semantics even in
a newer overall module.
### Transition Support Tooling
We expect that this change will fix far more programs than it breaks,
but it will break some programs. The most common programs that break
are buggy tests (see the [“fixes buggy code” section below](#fixes)
for details). Users who observe a difference in their programs need
support to pinpoint the change. We plan to provide two kinds of
support tooling, one static and one dynamic.

The static support tooling is a compiler flag that reports every loop
that is compiling differently due to the new semantics. Our prototype
implementation does a very good job of filtering out loops that are
provably unaffected by the change in semantics, so in a typical
program very few loops are reported. The new compiler flag,
`-d=loopvar=2`, can be invoked by adding an option to the `go` `build`
or `go` `test` command line: either `-gcflags=-d=loopvar=2` for
reports about the current package only, or `-gcflags=all=-d=loopvar=2`
for reports about all packages.

The dynamic support tooling is a new program called bisect that, with
help from the compiler, runs a test repeatedly with different sets of
loops opted in to the new semantics. By using a binary search-like
algorithm, bisect can pinpoint the exact loop or loops that, when
converted to the new semantics, cause a test failure. Once you have a
test that fails with the new semantics but passes with the old
semantics, you run:
```
bisect -compile=loopvar go test
```

We have used this dynamic tooling in a conversion of Google's internal
monorepo to the new loop semantics. The rate of test failure caused by
the change was about 1 in 8,000. Many of these tests took a long time
to run and contained complex code that we were unfamiliar with. The
bisect tool is especially important in this situation: it runs the
search while you are at lunch or doing other work, and when you return
it has printed the source file and line number of the loop that causes
the test failure when compiled with the new semantics. At that point,
it is trivial to rewrite the loop to pre-declare a per-loop variable
and no longer use `:=`, preserving the old meaning even in the new
semantics. We also found that code owners were far more likely to see
the actual problem when we could point to the specific line of code.
As noted in the [“fixes buggy code” section below](#fixes), all but
one of the test failures turned out to be a buggy test.
### Updates to the Go Ecosystem
Other parts of the Go ecosystem will need to be updated to understand
the new loop semantics.
Vet and the golang.org/x/tools/go/analysis framework are being updated
as part of #57001 to have access to the per-file language version
information. Analyses like the vet loopclosure check will need to
tailor their diagnostics based on the language version: in files using
the new semantics, there won't be `:=` loop variable problems anymore.
Other analyzers, like staticcheck and golangci-lint, may need updates
as well. We will notify the authors of those tools and work with them
to make sure they have the information they need.
## Rationale and Compatibility
In most Go design documents, Rationale and Compatibility are two
distinct sections. For this proposal, considerations of compatibility
are so fundamental that it makes sense to address them as part of the
rationale. To be completely clear: _this is a breaking change to Go_.
However, the specifics of how we plan to roll out the change follow
the spirit of the compatibility guidelines if not the “letter of the
law.”

In the [Go 2 transitions document](https://github.com/golang/proposal/blob/master/design/28221-go2-transitions.md#language-changes)
we gave the general rule that language redefinitions like what we just
described are not permitted, giving this very proposal as an example
of something that violates the general rule. We still believe that
that is the right general rule, but we have come to also believe that
the for loop variable case is strong enough to motivate a one-time
exception to that rule. Loop variables being per-loop instead of
per-iteration is the only design decision we know of in Go that makes
programs incorrect more often than it makes them correct. Since it is
the only such design decision, we do not see any plausible candidates
for additional exceptions.

The rest of this section presents the rationale and compatibility
considerations.
### A decade of experience shows the cost of the current semantics
Russ [talked at GopherCon once](https://go.dev/blog/toward-go2#explaining-problems)
about how we need agreement about the existence of a problem before we
move on to solutions. When we examined this issue in the run-up to Go
1, it did not seem like enough of a problem. The general consensus was
that it was annoying but not worth changing.

Since then, we suspect every Go programmer in the world has made this
mistake in one program or another. Russ certainly has done it
repeatedly over the past decade, despite being the one who argued for
the current semantics and then implemented them. (Apologies!)

The current cures for this problem are worse than the disease.

We ran a program to process the git logs of the top 14k modules, drawn
from about 12k git repos, and looked for commits with diff hunks that
were entirely “x := x” lines being added. We found about 600 such
commits. On close inspection, approximately half of the changes were
unnecessary, probably done at the insistence of inaccurate static
analysis, out of confusion about the semantics, or from an abundance
of caution. Perhaps the most striking was this pair of changes from
different projects:

```
 for _, informer := range c.informerMap {
+	informer := informer
 	go informer.Run(stopCh)
 }
```
```
 for _, a := range alarms {
+	a := a
 	go a.Monitor(b)
 }
```
One of these two changes is unnecessary and the other is a real bug
fix, but you can’t tell which is which without more context. (In one,
the loop variable is an interface value, and copying it has no effect;
in the other, the loop variable is a struct, and the method takes a
pointer receiver, so copying it ensures that the receiver is a
different pointer on each iteration.)

And then there are changes like this one, which is unnecessary
regardless of context (there is no opportunity for hidden
address-taking):
```
 for _, scheme := range artifact.Schemes {
+	scheme := scheme
 	Runtime.artifactByScheme[scheme.ID] = id
 	Runtime.schemesByID[scheme.ID] = scheme
 }
```
This kind of confusion and ambiguity is the exact opposite of the
readability we are aiming for in Go.

People are clearly having enough trouble with the current semantics
that they choose overly conservative tools and add “x := x” lines
by rote in situations not flagged by tools, preferring that to
debugging actual problems. This is an entirely rational choice, but it
is also an indictment of the current semantics.

We’ve also seen production problems caused in part by these semantics,
both inside Google and at other companies (for example,
[this problem at Let’s Encrypt](https://bugzilla.mozilla.org/show_bug.cgi?id=1619047)).
It seems likely that, world-wide, the current semantics have easily
cost many millions of dollars in wasted developer time and production
outages.
### Old code is unaffected, compiling exactly as before
The `go` lines in `go.mod` files give us a way to guarantee that all old code is
unaffected, even in a build that also contains new code. Only when you
change the `go` line in your own `go.mod` do the packages in that module get the new
semantics, and you control that. In general this one reason is not
sufficient, as laid out in the Go 2 transitions document. But it is a
key property that contributes to the overall rationale, with all the
other reasons added in.

### Changing the semantics globally would disallow gradual code repair
As noted earlier, [gradual code repair](https://go.dev/talks/2016/refactor.article)
is an important technique for deploying any potentially breaking
change: it allows focusing on one part of the code base at a time,
instead of having to consider all of it together. The per-module
`go` lines in `go.mod` and the per-file `//go:build` directives enable
gradual code repair.

Some people have suggested we simply make the change unconditionally
when using Go 1.30, instead of allowing this fine-grained selection.
Given the low impact we expect from the change, this “all at once”
approach may be viable even for sizable code bases. However, it leaves
no room for error and creates the possibility of a large problem that
cannot be broken into smaller problems. A forced global change removes
the safety net that the gradual approach provides. From an engineering
and risk reduction point of view, that seems unwise. The safer, more
gradual path is the better one.
### Changing the semantics is usually a no-op, and when it’s not, it fixes buggy code far more often than it breaks correct code {#fixes}
As mentioned above, we have recently (as of May 2023) enabled the new
loop semantics in Google's internal Go toolchain. In order to do
that, we ran all of our tests, found the specific loops that needed
not to change behavior in order to pass (using `bisect` on each newly
failing test), rewrote the specific loops not to use `:=`, and then
changed the semantics globally. For Google's internal code base, we
did make a global change, even for open-source Go libraries written
for older Go versions. One reason for the global change was pragmatic:
there is of course no code marked as “Go 1.30” in the world now, so if
not for the global change there would be no change at all. Another
reason was that we wanted to find out how much total work it would
require to change all code. The process was still gradual, in the
sense that we tested the entirety of Google's Go code many times with
a compiler flag enabling the change just for our own builds, and fixed
all broken code, before we made the global change that affected all
our users.

People who want to experiment with a global change in their code bases
can build with `GOEXPERIMENT=loopvar` using the current development
copy of Go. That experimental mode will also ship in the Go 1.21
release.

The vast majority of newly failing tests were table-driven tests using
[t.Parallel](https://pkg.go.dev/testing/#T.Parallel). The usual
pattern is to have a test that reduces to something like `TestAllEvenBuggy`
from the start of the document:
```
func TestAllEvenBuggy(t *testing.T) {
	testCases := []int{1, 2, 4, 6}
	for _, v := range testCases {
		t.Run("sub", func(t *testing.T) {
			t.Parallel()
			if v&1 != 0 {
				t.Fatal("odd v", v)
			}
		})
	}
}
```
This test aims to check that all the test cases are even (they are
not!), but it passes with current Go toolchains. The problem is that
`t.Parallel` stops the closure and lets the loop continue, and then it
runs all the closures in parallel when `TestAllEvenBuggy` returns. By the
time the `if` statement in the closure executes, the loop is done, and `v`
has its final iteration value, 6. All four subtests now continue
executing in parallel, and they all check that 6 is even, instead of
checking each of the test cases. There is no race in this code,
because `t.Parallel` orders all the `v&1` tests after the final update
to `v` during the range loop, so the test passes even using `go test
-race`. Of course, real-world examples are typically much more
complex.

Another common form of this bug is preparing test case data by
building slices of pointers. For example, this code, similar to an example at
the start of the document, builds a `[]*int32` for use as a repeated int32
in a protocol buffer:

```
func GenerateTestIDs() {
	var ids []*int32
	for i := int32(0); i < 10; i++ {
		ids = append(ids, &i)
	}
}
```
This loop aims to create a slice of ten different pointers to the
values 0 through 9, but instead it creates a slice of ten of the same
pointer, each pointing to 10.

For any of these loops, there are two useful rewrites. The first is to
remove the use of `:=`. For example:
```
func TestAllEvenBuggy(t *testing.T) {
	testCases := []int{1, 2, 4, 6}
	var v int // TODO: Likely loop scoping bug!
	for _, v = range testCases {
		t.Run("sub", func(t *testing.T) {
			t.Parallel()
			if v&1 != 0 {
				t.Fatal("odd v", v)
			}
		})
	}
}
```
or
```
func GenerateTestIDs() {
	var ids []*int32
	var i int32 // TODO: Likely loop scoping bug!
	for i = int32(0); i < 10; i++ {
		ids = append(ids, &i)
	}
}
```
This kind of rewrite keeps tests passing even if compiled using the
proposed loop semantics. Of course, most of the time the tests are
passing incorrectly; this just preserves the status quo.

The other useful rewrite is to add an explicit `x := x` assignment, as
discussed in the [Go FAQ](https://go.dev/doc/faq#closures_and_goroutines).
For example:
```
func TestAllEvenBuggy(t *testing.T) {
	testCases := []int{1, 2, 4, 6}
	for _, v := range testCases {
		v := v // TODO: This makes the test fail. Why?
		t.Run("sub", func(t *testing.T) {
			t.Parallel()
			if v&1 != 0 {
				t.Fatal("odd v", v)
			}
		})
	}
}
```
or
```
func GenerateTestIDs() {
	var ids []*int32
	for i := int32(0); i < 10; i++ {
		i := i // TODO: This makes the test fail. Why?
		ids = append(ids, &i)
	}
}
```
This kind of rewrite makes the test break under the current loop
semantics, and it will stay broken if compiled with the proposed
loop semantics. This rewrite is most useful for sending to the owners
of the code for further debugging.

Out of all the failing tests, only one affected loop was not in
test code. That code looked like:
```
var original *mapping
for _, saved := range savedMappings {
	if saved.start <= m.start && m.end <= saved.end {
		original = &saved
		break
	}
}
...
```
Unfortunately, this code was in a very low-level support program that
is invoked when a program is crashing, and a test checks that the code
contains no allocations or even runtime write barriers. In the old
loop semantics, both `original` and `saved` were function-scoped
variables, so the assignment `original = &saved` does not cause
`saved` to escape to the heap. In the new loop semantics, `saved` is
per-iteration, so `original = &saved` makes it escape the iteration
and therefore require heap allocation. The test failed because the
code is disallowed from allocating, yet it was now allocating. The fix
was to do the first kind of rewrite, declaring `saved` before the loop
and moving it back to function scope.

Similar code might change from allocating one variable per loop to
allocating N variables per loop. In some cases, that extra allocation
is inherent to fixing a latent bug. For example, `GenerateTestIDs`
above is now allocating 10 int32s instead of one – the price of
correctness. In a very frequently executed already-correct loop, the
new allocations may be unnecessary and could potentially cause more
garbage collector pressure and a measurable performance difference. If
so, standard monitoring and allocation profiles (`pprof
--alloc_objects`) should pinpoint the location easily, and the fix is
trivial: declare the variable above the loop. Benchmarking of the
public “bent” benchmark suite showed no statistically significant
performance difference overall, so we expect most programs to be
unaffected.

Not all the failing tests used code as obvious as the examples above.
One failing test that didn't use `t.Parallel` reduced to:
```
var once sync.Once
for _, tt := range testCases {
	once.Do(func() {
		http.HandleFunc("/handler", func(w http.ResponseWriter, r *http.Request) {
			w.Write(tt.Body)
		})
	})
	result := get("/handler")
	if result != string(tt.Body) {
		...
	}
}
```
This strange loop registers an HTTP handler on the first iteration and
then makes a request served by the handler on every iteration. For the
handler to serve the expected data, the `tt` captured by the handler
closure must be the same as the `tt` for the current iteration. With a
per-loop `tt`, that's true. With a per-iteration `tt`, it's not: the
handler keeps using the first iteration's `tt` even in later
iterations, causing the failure.

As difficult as that example may be to puzzle through, it is a
simplified version of the original. The bisect tool pinpointing the
exact loop was a huge help in finding the problem.

Our experience supports the claim that the new semantics fixes buggy
code far more often than it breaks correct code. The new semantics
only caused test failures in about 1 of every 8,000 test packages,
but running the updated Go 1.20 `loopclosure` vet check over our entire
code base flagged tests at a much higher rate: 1 in 400 (20 in 8,000).
The `loopclosure` checker has no false positives: all the reports are buggy
uses of `t.Parallel` in our source tree.
That is, about 5% of the flagged tests were like `TestAllEvenBuggy`;
the other 95% were like `TestAllEven`: not (yet) testing what they intended,
but correct tests of correct code even with the loop variable bug fixed.

Of course, there is always the possibility that Google’s tests may not
be representative of the overall ecosystem’s tests in various ways,
and perhaps this is one of them. But there is no indication from this
analysis of _any_ common idiom at all where per-loop semantics are
required. Also, Google's tests include tests of open source Go
libraries that we use, and there were only two failures, both reported
upstream. Finally, the git log analysis points in the same direction:
parts of the ecosystem are adopting tools with very high false
positive rates and doing what the tools say, with no apparent
problems.

To be clear, it _is_ possible to write artificial examples of code
that is “correct” with per-loop variables and “incorrect” with
per-iteration variables, but these are contrived.

One example, with credit to Tim King, is a convoluted way to sum the
numbers in a slice:
```
func sum(list []int) int {
	m := make(map[*int]int)
	for _, x := range list {
		m[&x] += x
	}
	for _, sum := range m {
		return sum
	}
	return 0
}
```
In the current semantics there is only one `&x` and therefore only one
map entry. With the new semantics there are many `&x` and many map
entries, so the map does not accumulate a sum.

Another example, with credit to Matthew Dempsky, is a non-racy loop:
```
for i, x := 0, 0; i < 1; i++ {
	go func() { x++ }()
}
```
This loop only executes one iteration (starts just one goroutine), and
`x` is not read or written in the loop condition or post-statement.
Therefore the one created goroutine is the only code reading or
writing `x`, making the program race-free. The rewritten semantics
would have to make a new copy of `x` for the next iteration when it
runs `i++`, and that copying of `x` would race with the `x++` in the
goroutine, making the rewritten program have a race. This example
shows that it is possible for the new semantics to introduce a race
where there was no race before. (Of course, there would be a race in
the old semantics if the loop iterated more than once.)

These examples show that it is technically possible for the
per-iteration semantics to change the correctness of existing code,
even if the examples are contrived. This is more evidence for the
gradual code repair approach.
### Changing 3-clause loops keeps all for loops consistent and fixes real-world bugs
Some people suggested only applying this change to range loops, not
three-clause for loops like `i := 0; i < n; i++`.

Adjusting the 3-clause form may seem strange to C programmers, but the
same capture problems that happen in range loops also happen in
three-clause for loops. Changing both forms eliminates that bug from
the entire language, not just one place, and it keeps the loops
consistent in their variable semantics. That consistency means that if
you change a loop from using range to using a 3-clause form or vice
versa, you only have to think about whether the iteration visits the
same items, not whether a subtle change in variable semantics will
break your code. It is also worth noting that JavaScript uses
per-iteration semantics for 3-clause for loops declared with `let`, with no
problems.

In Google's own code base, at least a few of the newly failing tests
were due to buggy 3-clause loops, like in the `GenerateTestIDs` example.
These 3-clause bugs happen less often, but they still happen at a high
enough rate to be worth fixing. The consistency arguments only add to
the case.
### Good tooling can help users identify exactly the loops that need the most scrutiny during the transition
As noted in the [transition discussion](#transition), our experience
analyzing the failures in Google’s Go tests shows that we can use
compiler instrumentation to identify loops that may be compiling
differently, because the compiler thinks the loop variables escape.
Almost all the time, this identifies a very small number of loops, and
one of those loops is right next to the failure. The automated bisect
tool removes even that small amount of manual effort.
### Static analysis is not a viable alternative
Whether a particular loop is “buggy” due to the current behavior
depends on whether the address of an iteration value is taken _and
then that pointer is used after the next iteration begins_. It is
impossible in general for analyzers to see where the pointer lands and
what will happen to it. In particular, analyzers cannot see clearly
through interface method calls or indirect function calls. Different
tools have made different approximations. Vet recognizes a few
definitely bad patterns in its `loopclosure` checker, and we added a
new pattern checking for mistakes using t.Parallel in Go 1.20. To
avoid false positives, `loopclosure` also has many false negatives.
Other checkers in the ecosystem err in the other direction. The commit
log analysis showed some checkers were producing over 90% false
positive rates in real code bases. (That is, when the checker was
added to the code base, the “corrections” submitted at the same time
were not fixing actual problems over 90% of the time in some commits.)
There is no perfect way to catch these bugs statically. Changing the
semantics, on the other hand, eliminates all the bugs.
### Mechanical rewriting to preserve old semantics is possible but mostly unnecessary churn
People have suggested writing a tool that rewrites _every_ for loop
flagged as changing by the compiler, preserving the old semantics by
removing the use of `:=`. Then a person could revert the loop changes
one at a time after careful code examination. A variant of this tool
might simply add a comment to each loop along with a `//go:build
go1.29` directive at the top of the file, leaving less for the person
to undo. This kind of tool is definitely possible to write, but our
experience with real Go code suggests that it would cause far more
churn than is necessary, since 95% of definitely buggy loops simply
became correct loops with the new semantics. The approach also assumes
that careful code examination will identify all buggy code, which in
our experience is an overly optimistic assumption. Even after bisected
test failures proved that specific loops were definitely buggy,
identifying the exact bug was quite challenging. And with the
rewriting tool, you don't even know for sure that the loop is buggy,
just that the compiler would treat it differently.

All in all, we believe that the combination of being able to generate
the compiler's report of changed positions is sufficient on the static
analysis side, along with the bisect tool to track down the source of
recognized misbehaviors. Of course, users who want a rewriting tool
can easily use the compiler's report to write one, especially if the
rewrite only adds comments and `//go:build` directives.
### Changing loop syntax entirely would cause unnecessary churn
We have talked in the past about introducing a different syntax for
loops (for example, #24282), and then giving the new syntax the new
semantics while deprecating the current syntax. Ultimately this would
cause a very significant amount of churn disproportionate to the
benefit: the vast majority of existing loops are correct and do not
need any fixes. In Google's Go source tree, the rate of buggy loops was
about 1 per 20,000. It would be a truly extreme response to force an
edit of every for loop that exists today, invalidate all existing
documentation, and then have two different for loops that Go
programmers need to understand for the rest of time, all to fix 1 bad
loop out of 20,000. Changing the semantics to match what people
overwhelmingly already expect provides the same value at far less
cost. It also focuses effort on newly written code, which tends to be
buggier than old code (because the old code has been at least
partially debugged already).
### Disallowing loop variable capture would cause unnecessary churn
Some people have suggested disallowing loop variable captures
entirely, which would certainly make it impossible to write a buggy
loop. Unfortunately, that would also invalidate essentially every Go
program in existence, the vast majority of which are correct. It would
also make loop variables less capable than ordinary variables, which
would be strange. Even if this were just a temporary state, with loop
captures allowed again after the semantic change, that's a huge amount
of churn to catch the 0.005% of for loops that are buggy.
### Experience from C# supports the change
Early versions of C# had per-loop variable scoping for their
equivalent of range loops. C# 5 changed the semantics to be
per-iteration, as in this proposal. (C# 5 did not change the 3-clause
for loop form, in contrast to this proposal.)
In a comment on the GitHub discussion, [@jaredpar reported](https://github.com/golang/go/discussions/56010#discussioncomment-3788526)
on experience with C#. Quoting that comment in full:
> I work on the C# team and can offer perspective here.
>
> The C# 5 rollout unconditionally changed the `foreach` loop variable
> to be per iteration. At the time of C# 5 there was no equivalent to
> Go's putting `go 1.30` in a go.mod file so the only choice was break
> unconditionally or live with the behavior. The loop variable lifetime
> became a bit of a sore point pretty much the moment the language
> introduced lambda expressions for all the reasons you describe. As the
> language grew to leverage lambdas more through features like LINQ,
> `async`, Task Parallel Libraries, etc ... the problem got worse. It
> got so prevalent that the C# team decided the unconditional break was
> justified. It would be much easier to explain the change to the,
> hopefully, small number of customers impacted vs. continuing to
> explain the tricksy behavior to new customers.
>
> This change was not taken lightly. It had been discussed internally
> for several years, [blogs were written about it](https://ericlippert.com/2009/11/12/closing-over-the-loop-variable-considered-harmful-part-one/),
> lots of analysis of customer code, upper management buy off, etc ...
> In end though the change was rather anticlimactic. Yes it did break a
> small number of customers but it was smaller than expected. For the
> customers impacted they responded positively to our justifications and
> accepted the proposed code fixes to move forward.
>
> I'm one of the main people who does customer feedback triage as well
> as someone who helps customers migrating to newer versions of the
> compiler that stumble onto unexpected behavior changes. That gives me
> a good sense of what _pain points_ exist for tooling migration. This
> was a small blip when it was first introduced but quickly faded. Even
> as recently as a few years ago I was helping large code bases upgrade
> from C# 4. While they do hit other breaking changes we've had, they
> rarely hit this one. I'm honestly struggling to remember the last time
> I worked with a customer hitting this.
>
> It's been ~10 years since this change was taken to the language and a
> lot has changed in that time. Projects have a property `<LangVersion>`
> that serves much the same purpose as the Go version in the go.mod
> file. These days when we introduce a significant breaking change we
> tie the behavior to `<LangVersion>` when possible. That helps
> separates the concept for customers of:
>
> 1. Acquiring a new toolset. This comes when you upgrade Visual Studio
> or the .NET SDK. We want these to be _friction free_ actions so
> customers get latest bug / security fixes. This never changes
> `<LangVersion>` so breaks don't happen here.
>
> 2. Moving to a new
> language version. This is an explicit action the customer takes to
> move to a new .NET / C# version. It is understood this has some cost
> associated with it to account for breaking changes.
>
> This separation has been very successful for us and allowed us to make
> changes that would not have been possible in the past. If we were
> doing this change today we'd almost certainly tie the break to a
> `<LangVersion>` update
## Implementation
The actual semantic changes are implemented in the compiler today on
an opt-in basis and form the basis of the experimental data. The
transition support tooling also exists today.
Anyone is welcome to try the change in their own trees to help inform
their understanding of the impact of the change. Specifically:
    go install golang.org/dl/gotip@latest
    gotip download
    GOEXPERIMENT=loopvar gotip test etc
will compile all Go code with the new per-iteration loop variables
(that is, a global change, ignoring `go.mod` settings). To add
compiler diagnostics about loops that are compiling differently:
    GOEXPERIMENT=loopvar gotip build -gcflags=all=-d=loopvar=2
Omit the `all=` to limit the diagnostics to the current package. To
debug a test failure that only happens with per-iteration loop
variables enabled, use:
    go install golang.org/x/tools/cmd/bisect@latest
    bisect -compile=loopvar gotip test the/failing/test
Other implementation work yet to be done includes documentation,
updating vet checks like loopclosure, and coordinating with the
authors of tools like staticcheck and golangci-lint. We should also
update `go fix` to remove redundant `x := x` lines from source files
that have opted in to the new semantics.
### Google testing
As noted above, we have already enabled the new loop semantics for all
code built by Google's internal Go toolchain, with only a small number
of affected tests. We will update this section to summarize any
production problems encountered.
As of May 9, it has been almost a week since the change was enabled,
and there have been no production problems, nor any bug reports of any kind.
### Timeline and Rollout
The general rule for proposals is to avoid speculating about specific
releases that will include a change. The proposal process does not
rush to meet arbitrary deadlines: we will take the time necessary to
make the right decision and, if accepted, to land the right changes
and support. That general rule is why this proposal has been referring
to Go 1.30, as a placeholder for the release that includes the new
loop semantics.
That said, the response to the [preliminary discussion of this idea](https://go.dev/issue/56010)
was enthusiastically positive, and we have no reason to expect a
different reaction to this formal proposal. Assuming that is the case,
it could be possible to ship the change in Go 1.22. Since the
GOEXPERIMENT support will ship in Go 1.21, once the proposal is
accepted and Go 1.21 is released, it would make sense to publish a
short web page explaining the change and encouraging users to try it
in their programs, like the instructions above. If the proposal is
accepted before Go 1.21 is released, that page could be published with
the release, including a link to the page in the Go 1.21 release
notes. Whenever the instructions are published, it would also make
sense to publish a blog post highlighting the upcoming change. It will
in general be good to advertise the change in as many ways as
possible.
## Open issues (if applicable)
### Bazel language version
Bazel's Go support (`rules_go`) does not support setting the language
version on a per-package basis. It would need to be updated to do
that, with gazelle maintaining that information in generated BUILD
files (derived from the `go.mod` files).
### Performance on other benchmark suites
As noted above, there is a potential for new allocations in programs
with the new semantics, which may cause changes in performance.
Although we observed no performance changes in the “bent” benchmark
suite, it would be good to hear reports from others with their own
benchmark suites.
# Proposal: Soft memory limit
Author: Michael Knyszek
Date: 15 September 2021
## Summary
This document proposes a new option for tuning the behavior of the Go garbage
collector by setting a soft memory limit on the total amount of memory that Go
uses.
This option comes in two flavors: a new `runtime/debug` function called
`SetMemoryLimit` and a `GOMEMLIMIT` environment variable.
This new option gives applications better control over their resource economy.
It empowers users to:
* Better utilize the memory that they already have,
* Confidently decrease their memory limits, knowing Go will respect them,
* Avoid unsupported forms of garbage collection tuning like heap ballasts.
## Motivation
Go famously offers a single option for tuning the Go garbage collector: `GOGC`
(or `runtime/debug.SetGCPercent`).
This option offers a direct tradeoff between CPU and memory: it directly
dictates the amount of memory overhead available to the garbage collector.
Less memory overhead means more frequent garbage collection cycles, and more CPU
spent on GC.
More memory overhead means less frequent cycles and more CPU for the
application.
This option has carried the Go project well for a long time.
However, years of experience have produced a meaningful amount of evidence
suggesting that `GOGC` is not enough.
A direct tradeoff isn't always possible, because memory is not fungible relative
to CPU resources.
Consider an application that runs as a service or a daemon.
In these cases, out-of-memory errors often arise due to transient spikes in
applications' heap sizes.
Today, Go does not respect users' memory limits.
As a result, the Go community has developed various patterns for dealing with
out-of-memory errors.
In scenarios where out-of-memory errors are acceptable (to varying degrees), Go
users choose to live with these errors, restarting their service regularly,
instead of reducing `GOGC`.
Reducing `GOGC` directly impacts users' productivity metrics, metrics whose
behavior is largely governed by steady-state behavior, not transients, so this
course of action is undesirable.
In scenarios where out-of-memory errors are unacceptable, a similar situation
occurs.
Users are unwilling to increase GOGC to achieve productivity improvements, even
though in the steady-state they may actually have that memory available to them.
They pick an overly conservative value to reduce the chance of an out-of-memory
condition in transient states, but this choice leaves productivity on the table.
This out-of-memory avoidance led to the Go community developing its own
homegrown garbage collector tuning.
The first example of such tuning is the heap ballast.
In order to increase their productivity metrics while also avoiding
out-of-memory errors, users sometimes pick a low `GOGC` value, and fool the GC
into thinking that there's a fixed amount of memory live.
This solution elegantly scales with `GOGC`: as the real live heap increases, the
impact of that fixed set decreases, and `GOGC`'s contribution to the heap size
dominates.
In effect, `GOGC` is larger when the heap is smaller, and smaller (down to its
actual value) when the heap is larger.
Unfortunately, this solution is not portable across platforms, and is not
guaranteed to continue to work as the Go runtime changes.
Furthermore, users are forced to do complex calculations and estimate runtime
overheads in order to pick a heap ballast size that aligns with their memory
limits.
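The ballast pattern described above looks roughly like the following sketch. The size here is an arbitrary example, and this shows the unsupported workaround being discussed, not a recommendation:

```go
package main

import "runtime"

func main() {
	// Allocate a large, never-written slice. The GC counts it as live,
	// so with a low GOGC the effective headroom is larger while the
	// real live heap is small; its relative effect shrinks as the real
	// live heap grows. On most platforms the untouched pages cost
	// little physical memory, but that is exactly the non-portable
	// assumption that makes ballasts fragile.
	ballast := make([]byte, 1<<28) // 256 MiB, chosen arbitrarily

	// ... run the application ...

	runtime.KeepAlive(ballast) // keep the ballast reachable until exit
}
```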
The second example of such tuning is calling `runtime/debug.FreeOSMemory` at
some regular interval, forcing a garbage collection to trigger sooner, usually
to respect some memory limit.
This case is much more dangerous, because calling it too frequently can lead a
process to entirely freeze up, spending all its time on garbage collection.
Working with it takes careful consideration and experimentation to be both
effective and avoid serious repercussions.
Both of these situations, dealing with out-of-memory errors and homegrown
garbage collection tuning, have a straightforward solution that other platforms
(like Java and TCMalloc) already provide its users: a configurable memory limit,
enforced by the Go runtime.
A memory limit would give the Go runtime the information it needs to both
respect users' memory limits, and allow them to optionally use that memory
always, to cut back the cost of garbage collection.
## Non-goals
1. Accounting for and reacting to memory outside the Go runtime, such as:
* Co-tenant processes in a container,
* C/C++ memory, and
* Kernel memory counted against the process.
Dealing with and reacting to memory used by other processes, and even to memory
within the process governed by the semantics of a completely different
programming language, is an incredibly hard problem and often requires a
coordinated effort.
It's outside of the scope of the Go runtime to solve this problem for everyone,
but I believe the Go runtime has an important role to play in supporting these
worthwhile efforts.
1. Eliminate out-of-memory errors in 100% of cases.
Whatever policy this API adheres to is going to fail for some use-case, and
that's OK.
The policy can be tweaked and improved upon as time goes on, but it's impossible
for us to create a solution that is all things to all people without a
tremendous amount of toil for our team (by e.g. exposing lots of tuning knobs).
On top of this, any such solution is likely to become difficult to use at all.
The best we can do is make life better for as many users as possible.
## Detailed design
The design of a soft memory limit consists of four parts: an API, mechanisms to
enforce the soft limit, guidance through thorough documentation, and
telemetry for identifying issues in production.
### API
```go
package runtime/debug
// SetMemoryLimit provides the runtime with a soft memory limit.
//
// The runtime undertakes several processes to try to respect this
// memory limit, including adjustments to the frequency of garbage
// collections and returning memory to the underlying system more
// aggressively. This limit will be respected even if GOGC=off (or,
// if SetGCPercent(-1) is executed).
//
// The input limit is provided as bytes, and is intended to include
// all memory that the Go runtime has direct control over. In other
// words, runtime.MemStats.Sys - runtime.MemStats.HeapReleased.
//
// This limit does not account for memory external to Go, such as
// memory managed by the underlying system on behalf of the process,
// or memory managed by non-Go code inside the same process.
//
// A zero limit or a limit that's lower than the amount of memory
// used by the Go runtime may cause the garbage collector to run
// nearly continuously. However, the application may still make
// progress.
//
// See https://golang.org/doc/gc-ergonomics for a detailed guide
// explaining the soft memory limit as well as a variety of common
// use-cases and scenarios.
//
// SetMemoryLimit returns the previously set memory limit.
// By default, the limit is math.MaxInt64.
// A negative input does not adjust the limit, and allows for
// retrieval of the currently set memory limit.
func SetMemoryLimit(limit int64) int64
```
Note that the soft limit is expressed in terms of the total amount of memory
used by the Go runtime.
This choice means that enforcement of the soft memory limit by the GC must
account for additional memory use such as heap metadata and fragmentation.
It also means that the runtime is responsible for any idle heap memory above the
limit, i.e. any memory that is currently unused by the Go runtime, but has not
been returned to the operating system.
As a result, the Go runtime's memory scavenger must also participate in
enforcement.
This choice is a departure from similar APIs in other languages (including the
experimental `SetMaxHeap` [patch](https://golang.org/cl/46751)), whose limits
only include space occupied by heap objects themselves.
To reduce confusion and help facilitate understanding, each class of memory that
is accounted for will be precisely listed in the documentation.
In addition, the soft memory limit can be set directly via an environment
variable that all Go programs recognize: `GOMEMLIMIT`.
For ease-of-use, I propose `GOMEMLIMIT` accept either an integer value in bytes,
or a string such as "8GiB."
More specifically, an integer followed by one of several recognized unit
strings, without spaces.
I propose supporting "B," "KiB," "MiB," "GiB," and "TiB" indicating the
power-of-two versions of each.
Similarly, I propose supporting "KB," "MB," "GB," and "TB," which refer to their
power-of-ten counterparts.
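A sketch of how such strings might be parsed follows. This is illustrative only; the function name and exact error handling are assumptions, not the runtime's actual parser:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMemLimit converts values like "8GiB", "500MB", or a bare byte
// count like "1024" into bytes.
func parseMemLimit(s string) (int64, error) {
	units := []struct {
		suffix string
		mult   int64
	}{
		// Check multi-letter suffixes before "B" so "8KiB" is not
		// misread as 8Ki of unit "B".
		{"KiB", 1 << 10}, {"MiB", 1 << 20}, {"GiB", 1 << 30}, {"TiB", 1 << 40},
		{"KB", 1e3}, {"MB", 1e6}, {"GB", 1e9}, {"TB", 1e12},
		{"B", 1},
	}
	for _, u := range units {
		if strings.HasSuffix(s, u.suffix) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, u.suffix), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * u.mult, nil
		}
	}
	return strconv.ParseInt(s, 10, 64) // bare integer: bytes
}

func main() {
	for _, s := range []string{"8GiB", "500MB", "1024"} {
		n, err := parseMemLimit(s)
		if err != nil {
			fmt.Println(s, "->", err)
			continue
		}
		fmt.Printf("%s -> %d bytes\n", s, n)
	}
}
```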
### Enforcement
#### Garbage collection
In order to ensure the runtime maintains the soft memory limit, it needs to
trigger at a point such that the total heap memory used does not exceed the soft
limit.
Because the Go garbage collector's memory use is defined entirely in terms of
the heap goal, altering its definition is sufficient to ensure that a memory
limit is enforced.
However, the heap goal is defined in terms of object bytes, while the memory
limit includes a much broader variety of memory classes, necessitating a
conversion function between the two.
To compute the heap limit ![`\hat{L}`](48409/inl1.png) from the soft memory
limit ![`L`](48409/inl2.png), I propose the following calculation:
![Equation 1](48409/eqn1.png)
![`T`](48409/inl3.png) is the total amount of memory mapped by the Go runtime.
![`F`](48409/inl4.png) is the amount of free and unscavenged memory the Go
runtime is holding.
![`A`](48409/inl5.png) is the number of bytes in allocated heap objects at the
time ![`\hat{L}`](48409/inl1.png) is computed.
The second term, ![`(T - F - A)`](48409/inl6.png), represents the sum of
non-heap overheads.
Free and unscavenged memory is specifically excluded because this is memory that
the runtime might use in the near future, and the scavenger is specifically
instructed to leave the memory up to the heap goal unscavenged.
Failing to exclude free and unscavenged memory could lead to a very poor
accounting of non-heap overheads.
With ![`\hat{L}`](48409/inl1.png) fully defined, our heap goal for cycle
![`n`](48409/inl7.png) (![`N_n`](48409/inl8.png)) is a straightforward extension
of the existing one.
Where
* ![`M_n`](48409/inl9.png) is equal to bytes marked at the end of GC n's mark
phase
* ![`S_n`](48409/inl10.png) is equal to stack bytes at the beginning of GC n's
mark phase
* ![`G_n`](48409/inl11.png) is equal to bytes of globals at the beginning of GC
n's mark phase
* ![`\gamma`](48409/inl12.png) is equal to
![`1+\frac{GOGC}{100}`](48409/inl13.png)
then
![Equation 2](48409/eqn2.png)
Over the course of a GC cycle, non-heap overheads remain stable because they
mostly increase monotonically.
However, the GC needs to be responsive to any change in non-heap overheads.
Therefore, I propose a more heavy-weight recomputation of the heap goal every
time it's needed, as opposed to computing it only once per cycle.
This also means the GC trigger point needs to be dynamically recomputable.
This check will create additional overheads, but they're likely to be low, as
the GC's internal statistics are updated only on slow paths.
The nice thing about this definition of ![`\hat{L}`](48409/inl1.png) is that
it's fairly robust to changes to the Go GC, since total mapped memory, free and
unscavenged memory, and bytes allocated in objects, are fairly fundamental
properties (especially to any tracing GC design).
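The conversion and the clamped heap goal can be sketched as follows. This is a simplification: the real pacer recomputes these quantities dynamically, and the unclamped γ-based goal shown here is assumed from the definitions above rather than taken from an implementation:

```go
package main

import "fmt"

// heapLimit computes L̂ = L − (T − F − A): the soft limit L minus
// non-heap overheads, where T is total mapped memory, F is free and
// unscavenged memory, and A is allocated heap object bytes. All
// quantities are in bytes.
func heapLimit(softLimit, mapped, freeUnscavenged, allocated int64) int64 {
	return softLimit - (mapped - freeUnscavenged - allocated)
}

// heapGoal clamps a GOGC-derived goal γ·(M+S+G), with γ = 1+GOGC/100,
// to the heap limit L̂.
func heapGoal(limitHat, marked, stacks, globals, gogc int64) int64 {
	goal := (marked + stacks + globals) * (100 + gogc) / 100
	if limitHat < goal {
		goal = limitHat
	}
	return goal
}

func main() {
	lhat := heapLimit(1<<30, 600<<20, 100<<20, 400<<20)
	fmt.Println("heap limit:", lhat)
	fmt.Println("heap goal:", heapGoal(lhat, 200<<20, 10<<20, 2<<20, 100))
}
```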
#### Death spirals
As the live heap grows toward ![`\hat{L}`](48409/inl1.png), the Go garbage
collector is going to stray from the tradeoff defined by `GOGC`, and will
trigger more and more often to reach its goal.
Left unchecked, the Go garbage collector will eventually run continuously, and
increase its utilization as its runway disappears.
Eventually, the application will fail to make progress.
This process is referred to as a death spiral.
One way to deal with this situation is to place a limit on the amount of total
CPU utilization of the garbage collector.
If the garbage collector were to execute and exceed that limit at any point, it
will instead let the application proceed, even if that means missing its goal
and breaching the memory limit.
I propose we do exactly this, but rather than provide another knob for
determining the maximum fraction of CPU time, I believe that we should simply
pick a reasonable default based on `GOGC`.
I propose that we pick 50% as the default fraction.
This fraction is reasonable and conservative since most applications won't come
close to this threshold in normal execution.
To implement this policy, I propose a leaky bucket mechanism inspired by a tool
called `jvmquake` developed by [Netflix for killing Java service
instances](https://netflixtechblog.medium.com/introducing-jvmquake-ec944c60ba70)
that could fall into a death spiral.
To summarize, the mechanism consists of a conceptual bucket with a capacity, and
that bucket accumulates GC CPU time.
At the same time, the bucket is drained by mutator CPU time.
Should the ratio of GC CPU time to mutator CPU time exceed 1:1 for some time
(determined by the bucket's capacity) then the bucket's contents will tend
toward infinity.
At the point in which the bucket's contents exceed its capacity, `jvmquake`
would kill the target service instance.
In this case instead of killing the process, the garbage collector will
deliberately prevent user goroutines from assisting the garbage collector in
order to prevent the bucket from overflowing.
The purpose of the bucket is to allow brief spikes in GC CPU utilization.
Otherwise, anomalous situations could cause unnecessary missed assists that make
GC assist pacing less smooth.
A reasonable bucket capacity will have to be chosen empirically, as it should be
large enough to accommodate worst-case pause times but not too large such that a
100% GC CPU utilization spike could cause the program to become unresponsive for
more than about a second.
1 CPU-second per `GOMAXPROCS` seems like a reasonable place to start.
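The leaky bucket described above can be sketched as follows. The shape is assumed for illustration and is not the runtime's implementation:

```go
package main

import "fmt"

// gcCPULimiter accumulates GC CPU time and is drained by mutator CPU
// time, with contents clamped to [0, capacity]. A full bucket means GC
// CPU time has exceeded mutator CPU time (a 1:1 ratio, i.e. 50%
// utilization) for longer than the capacity allows, so assists should
// be throttled.
type gcCPULimiter struct {
	fill, capacity int64 // CPU-nanoseconds
}

func (l *gcCPULimiter) accumulate(mutatorTime, gcTime int64) {
	l.fill += gcTime - mutatorTime
	if l.fill < 0 {
		l.fill = 0
	}
	if l.fill > l.capacity {
		l.fill = l.capacity
	}
}

func (l *gcCPULimiter) limiting() bool { return l.fill >= l.capacity }

func main() {
	l := &gcCPULimiter{capacity: 1e9} // 1 CPU-second
	l.accumulate(0, 5e8)              // a brief spike fits in the bucket
	fmt.Println(l.limiting())
	l.accumulate(0, 1e9) // sustained 100% GC utilization overflows it
	fmt.Println(l.limiting())
}
```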
Unfortunately, 50% is not always a reasonable choice for small values of `GOGC`.
Consider an application running with `GOGC=10`: an overall 50% GC CPU
utilization limit for `GOGC=10` is likely going to be always active, leading to
significant overshoot.
This high utilization is due to the fact that the Go GC at `GOGC=10` will reach
the point at which it may no longer start a GC much sooner than, say, `GOGC=100`.
At that point, the GC has no option but to increase utilization to meet its
goal.
Because it will then be capped at increasing utilization, the GC will have no
choice but to use more memory and overshoot.
As a result, this effectively creates a minimum `GOGC` value: below a certain
`GOGC`, the runtime will be effectively acting as if the `GOGC` value was
higher.
For now, I consider this acceptable.
#### Returning memory to the platform
In the context of maintaining a memory limit, it's critical that the Go runtime
return memory to the underlying platform as a part of that process.
Today, the Go runtime returns memory to the system with a background goroutine
called the scavenger, which paces itself to consume around 1% of 1 CPU.
This pacing is conservative, but necessarily so: the scavenger must synchronize
with any goroutine allocating pages of memory from the heap, so this pacing is
generally a slight underestimate as it fails to include synchronization
overheads from any concurrent allocating goroutines.
Currently, the scavenger's goal is to return free memory to the platform until
it reaches the heap goal, accounting for page-level fragmentation and a fixed
10% overhead to avoid paging costs.
In the context of a memory limit, I propose that the scavenger's goal becomes
that limit.
Then, the scavenger should pace itself more aggressively as the runtime's memory
use approaches the limit.
I propose it does so using a proportional-integral controller whose input is the
difference between the memory limit and the memory used by Go, and whose output
is the CPU utilization target of the background scavenger.
This will make the background scavenger more reliable.
However, the background scavenger likely won't return memory to the OS promptly
enough for the memory limit, so in addition, I propose having span allocations
eagerly return memory to the OS to stay under the limit.
The time a goroutine spends eagerly returning memory will also count toward the
50% GC CPU limit described in the [Death spirals](#death-spirals) section.
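A proportional-integral controller of the kind described above can be sketched as follows. The gains, bounds, and input normalization are illustrative assumptions, not the runtime's actual tuning:

```go
package main

import "fmt"

// piController maps an error signal (here, the normalized gap between
// memory used and the memory limit) to a CPU utilization target for
// the background scavenger.
type piController struct {
	kp, ki   float64 // proportional and integral gains
	integral float64 // accumulated error
	min, max float64 // output clamp
}

func (c *piController) next(input float64) float64 {
	c.integral += input
	out := c.kp*input + c.ki*c.integral
	if out < c.min {
		out = c.min
	}
	if out > c.max {
		out = c.max
	}
	return out
}

func main() {
	// The utilization target grows as memory use approaches the limit.
	c := &piController{kp: 0.5, ki: 0.1, min: 0.01, max: 1.0}
	for _, gap := range []float64{0.1, 0.5, 1.0} {
		fmt.Printf("gap=%.1f -> utilization=%.2f\n", gap, c.next(gap))
	}
}
```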
#### Alternative approaches considered
##### Enforcement
The conversion function from the memory limit to the heap limit described in
this section is the result of an impedance mismatch between how the GC pacer
views memory and how the memory limit is defined (i.e. how the platform views
memory).
An alternative approach would be to resolve this impedance mismatch.
One way to do so would be to define the memory limit in terms of heap object
bytes.
As discussed in the [API section](#api) however, this makes for a poor user
experience.
Another way is to redefine the GC pacer's view of memory to include other memory
sources.
Let's focus on the most significant of these: fragmentation.
Suppose we redefined the heap goal and the garbage collector's pacing to be
based on the spans containing objects, rather than the objects themselves.
This definition is straightforward to implement: marked memory is defined as the
sum total of memory used by spans containing marked objects, and the heap is
considered to grow each time a fresh span is allocated.
Unfortunately, this redefinition comes with two major caveats that make it very
risky.
The first is that the definition of the GC steady-state, upon which much of the
pacer's intuition is built, now also depends on the degree of fragmentation,
making it a more fragile state in practice.
The second is that most heaps will have an inflated size.
Consider a situation where we start with a very dense heap.
After some time, most of the objects die, but there's still at least one object
alive in each span that previously contained a live object.
With the redefinition, the overall heap will grow to the same size despite much
less memory being alive.
In contrast, the existing definition will cause the heap to grow only to a
multiple of the actual live objects' memory, and it's very unlikely that it will
go beyond the spans already in use.
##### Returning memory to the platform
If returning memory to the OS eagerly becomes a significant performance issue, a
reasonable alternative could be to crank up the background scavenger's CPU usage
in response to growing memory pressure.
This needs more thought, but given that it would now be controlled by a
controller, its CPU usage will be more reliable, and this is an option we can
keep in mind.
One benefit of this option is that it may impact latency less prominently.
### Documentation
Alongside this new feature I plan to create a new document in the doc directory
of the Go repository entitled "Go GC Ergonomics."
The purpose of this document is four-fold:
* Provide Go users with a high-level, but accurate mental and visual model of
how the Go GC and scavenger behave with varying GOGC and GOMEMLIMIT settings.
* Address common use-cases and provide advice, to promote good practice for each
setting.
* Break down how Go accounts for memory in excruciating detail.
Often memory-related documentation is frustratingly imprecise, making every
user's job much more difficult.
* Describe how to identify and diagnose issues related to the GC through runtime
metrics.
While Hyrum's Law guarantees that the API will be used in unexpected ways, at
least a central and complete living document will exist to help users better
understand what it is that their code is doing.
### Telemetry
Identifying issues with the garbage collector becomes even more important with
new ways to interact with it.
While the Go runtime already exposes metrics that could aid in identifying
issues, these metrics are insufficient to create a complete diagnosis,
especially in light of the new API.
To further assist users in diagnosing issues related to the API (be that misuse
or bugs in the implementation) and the garbage collector in general, I propose
the addition of three new metrics to the [runtime/metrics
package](https://pkg.go.dev/runtime/metrics):
* `/gc/throttles:events`: a monotonically increasing count of leaky bucket
overflows.
* Direct indicator of the application entering a death spiral with the soft
memory limit enabled.
* `/gc/cycles-by-utilization:percent`: histogram of GC cycles by GC CPU
utilization.
* Replaces the very misleading runtime.MemStats.GCCPUFraction
* `/gc/scavenge/cycles-by-utilization:percent`: histogram of scavenger
utilization.
* Since the scavenging rate can now change, identifying possible issues
there will be critical.
## Prior art
### Java
Nearly every Java garbage collector operates with a heap limit by default.
As a result, the heap limit is not a special mode of operation for memory
limits, but rather the status quo.
The limit is typically configured by passing the `-Xmx` flag to the Java runtime
executable.
Note that this is a heap limit, not a memory limit, and so only counts heap
objects.
The OpenJDK runtime operates with a default value of ¼ of available memory or 1
GiB, whichever is lesser.
Generally speaking, Java runtimes often only return memory to the OS when it
decides to shrink the heap space used; more recent implementations (e.g. G1) do
so more rarely, except when [the application is
idle](https://openjdk.java.net/jeps/346).
Some JVMs are "container aware" and read the memory limits of their containers
to stay under the limit.
This behavior is closer to what is proposed in this document, but I do not
believe the memory limit is directly configurable, like the one proposed here.
### SetMaxHeap
For nearly 4 years, the Go project has been trialing an experimental API in the
`runtime/debug` package called `SetMaxHeap`.
The API is available as a patch in Gerrit and has been used for some time within
Google.
The API proposed in this document builds on top of the work done on
`SetMaxHeap`.
Some notable details about `SetMaxHeap`:
* Its limit is defined solely in terms of heap object bytes, like Java.
* It does not alter the scavenging policy of the Go runtime at all.
* It accepts an extra channel argument that provides GC pressure notifications.
* This channel is ignored in most uses I'm aware of.
* One exception is where it is used to log GC behavior for telemetry.
* It attempts to limit death spirals by placing a minimum on the runway the GC
has.
* This minimum was set to 10%.
Lessons learned from `SetMaxHeap`:
* Backpressure notifications are unnecessarily complex for many use-cases.
* Figuring out a good heap limit is tricky and leads to conservative limits.
* Without a memory return policy, its usefulness for OOM avoidance is limited.
### TCMalloc
TCMalloc provides a `SetMemoryLimit` function to set a [soft memory
limit](https://github.com/google/tcmalloc/blob/cb5aa92545ded39f75115f3b2cc2ffd66a17d55b/tcmalloc/malloc_extension.h#L306).
Because dynamic memory allocation is provided in C and C++ as a library,
TCMalloc's `SetMemoryLimit` can only be aware of its own overheads, but notably
it does include all sources of fragmentation and metadata in its calculation.
Furthermore, it maintains a policy of eagerly returning memory to the platform
if an allocation would cause TCMalloc's memory use to exceed the specified
limit.
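To illustrate that policy, here is a toy model in Go — not TCMalloc's actual implementation, and all names are invented — of a soft limit with eager return of cached memory.

```go
package main

import "fmt"

// allocator is a toy model of a soft memory limit with an
// eager-return policy, in the spirit of TCMalloc's SetMemoryLimit.
type allocator struct {
	limit int // soft limit on bytes retained from the OS
	inUse int // bytes handed out to the application
	cache int // freed bytes still held from the OS for reuse
}

func (a *allocator) retained() int { return a.inUse + a.cache }

// alloc takes n fresh bytes from the OS (assume, pessimistically,
// that cached memory is the wrong size class and cannot be reused).
// If the request would push retained memory past the soft limit,
// cached memory is returned to the OS first. Live allocations may
// still exceed the limit: it is soft.
func (a *allocator) alloc(n int) {
	if over := a.retained() + n - a.limit; over > 0 {
		release := over
		if release > a.cache {
			release = a.cache
		}
		a.cache -= release // eagerly returned to the OS
	}
	a.inUse += n
}

// free returns n bytes to the allocator's cache.
func (a *allocator) free(n int) {
	a.inUse -= n
	a.cache += n
}

func main() {
	a := &allocator{limit: 100}
	a.alloc(70)
	a.free(40)  // 30 in use, 40 cached: retained 70
	a.alloc(50) // would retain 120, so 20 cached bytes are released
	fmt.Println("in use:", a.inUse, "cached:", a.cache, "retained:", a.retained())
}
```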
### Go 1 compatibility
This change adds an API to the Go project and does not alter existing ones.
Therefore, the proposed changes are Go 1 backwards compatible.
# Proposal: Lazy Module Loading
Author: Bryan C. Mills (with substantial input from Russ Cox, Jay Conrod, and
Michael Matloob)
Last updated: 2020-02-20
Discussion at https://golang.org/issue/36460.
## Abstract
We propose to change `cmd/go` to avoid loading transitive module dependencies
that have no observable effect on the packages to be built.
The key insights that lead to this approach are:
1. If _no_ package in a given dependency module is ever (even transitively)
imported by any package loaded by an invocation of the `go` command, then an
incompatibility between any package in that dependency and any other package
has no observable effect in the resulting program(s). Therefore, we can
safely ignore the (transitive) requirements of any module that does not
contribute any package to the build.
2. We can use the explicit requirements of the main module as a coarse filter
on the set of modules relevant to the main module and to previous
invocations of the `go` command.
Based on those insights, we propose to change the `go` command to retain more
transitive dependencies in `go.mod` files and to avoid loading `go.mod` files
for “irrelevant” modules, while still maintaining high reproducibility for build
and test operations.
## Background
In the initial implementation of modules, we attempted to make `go mod tidy`
prune out of the `go.mod` file any module that did not provide a transitive
import of the main module. However, that did not always preserve the remaining
build list: a module that provided no packages might still raise the minimum
requirement on some _other_ module that _did_ provide a package.
We addressed that problem in [CL 121304] by explicitly retaining requirements on
all modules that provide _directly-imported_ packages, _as well as_ a minimal
set of module requirement roots needed to retain the selected versions of
transitively-imported packages.
In [#29773] and [#31248], we realized that, due to the fact that the `go.mod`
file is pruned to remove indirect dependencies already implied by other
requirements, we must load the `go.mod` file for all versions of dependencies,
even if we know that they will not be selected — even including the main module
itself!
In [#30831] and [#34016], we learned that following deep history makes
problematic dependencies very difficult to completely eliminate. If the
repository containing a module is no longer available and the module is not
cached in a module mirror, then we will encounter an error when loading any
module — even a very old, irrelevant one! — that required it.
In [#26904], [#32058], [#33370], and [#34417], we found that the need to
consider every version of a module separately, rather than only the selected
version, makes the `replace` directive difficult to understand, difficult to use
correctly, and generally more complex than we would like it to be.
In addition, users have repeatedly expressed the desire to avoid the cognitive
overhead of seeing “irrelevant” transitive dependencies ([#26955], [#27900],
[#32380]), reasoning about older-than-selected transitive dependencies
([#36369]), and fetching large numbers of `go.mod` files ([#33669], [#29935]).
### Properties
In this proposal, we aim to achieve a property that we call <dfn>lazy
loading</dfn>:
* In the steady state, an invocation of the `go` command should not load any
`go.mod` file or source code for a module (other than the main module) that
provides no _packages_ loaded by that invocation.
* In particular, if the selected version of a module is not changed by a
`go` command, the `go` command should not load a `go.mod` file or source
code for any _other_ version of that module.
We also want to preserve <dfn>reproducibility</dfn> of `go` command invocations:
* An invocation of the `go` command should either load the same version of
each package as every other invocation since the last edit to the `go.mod`
file, or should edit the `go.mod` file in a way that causes the next
invocation on any subset of the same packages to use the same versions.
## Proposal
### Invariants
We propose that, when the main module's `go.mod` file specifies `go 1.15` or
higher, every invocation of the `go` command should update the `go.mod` file to
maintain three invariants.
1. (The <dfn>import invariant</dfn>.) The main module's `go.mod` file
explicitly requires the selected version of every module that contains one
or more packages that were transitively imported by any package in the main
module.
2. (The <dfn>argument invariant</dfn>.) The main module's `go.mod` file
    explicitly requires the selected version of every module that contains one
    or more packages that matched an explicit [package pattern] argument.
3. (The <dfn>completeness invariant</dfn>.) The version of every module that
contributed any package to the build is recorded in the `go.mod` file of
    either the main module itself or one of the modules it requires explicitly.
The _completeness invariant_ alone is sufficient to ensure _reproducibility_ and
_lazy loading_. However, it is under-constrained: there are potentially many
_minimal_ sets of requirements that satisfy the completeness invariant, and even
more _valid_ solutions. The _import_ and _argument_ invariants guide us toward a
_specific_ solution that is simple and intuitive to explain in terms of the `go`
commands invoked by the user.
If the main module satisfies the _import_ and _argument_ invariants, and all
explicit module dependencies also satisfy the import invariant, then the
_completeness_ invariant is also trivially satisfied. Given those, the
completeness invariant exists only in order to tolerate _incomplete_
dependencies.
If the import invariant or argument invariant holds at the start of a `go`
invocation, we can trivially preserve that invariant (without loading any
additional packages or modules) at the end of the invocation by updating the
`go.mod` file with explicit versions for all module paths that were already
present, in addition to any new main-module imports or package arguments found
during the invocation.
### Module loading procedure
At the start of each operation, we load all of the explicit requirements from
the main module's `go.mod` file.
If we encounter an import from any module that is not already _explicitly_
required by the main module, we perform a <dfn>deepening scan</dfn>. To perform
a deepening scan, we read the `go.mod` file for each module explicitly required
by the main module, and add its requirements to the build list. If any
explicitly-required module uses `go 1.14` or earlier, we also read the `go.mod`
files for all of that module's (transitive) module dependencies.
(The deepening scan allows us to detect changes to the import graph without
loading the whole graph explicitly: if we encounter a new import from within a
previously-irrelevant package, the deepening scan will re-read the requirements
of the module containing that package, and will ensure that the selected version
of that import is compatible with all other relevant packages.)
As we load each imported package, we also read the `go.mod` file for the module
containing that package and add its requirements to the build list — even if
that version of the module was already explicitly required by the main module.
(This step is theoretically redundant: the requirements of the main module will
already reflect any relevant dependencies, and the _deepening scan_ will catch
any previously-irrelevant dependencies that subsequently _become_ relevant.
However, reading the `go.mod` file for each imported package makes the `go`
command much more robust to inconsistencies in the `go.mod` file — including
manual edits, erroneous version-control merge resolutions, incomplete
dependencies, and changes in `replace` directives and replacement directory
contents.)
If, after the _deepening scan,_ the package to be imported is still not found in
any module in the build list, we resolve the `latest` version of a module
containing that package and add it to the build list (following the same search
procedure as in Go 1.14), then perform another deepening scan (this time
including the newly added module) to ensure consistency.
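The procedure above can be sketched as follows. This is a toy model: version comparison is lexical rather than semantic, and all names are illustrative rather than taken from `cmd/go`.

```go
package main

import (
	"fmt"
	"sort"
)

// module identifies a specific version of a module path.
type module struct{ path, version string }

// modFile models the contents of a module's go.mod file.
type modFile struct {
	goVersion string   // value of the go directive, e.g. "1.14"
	requires  []module // the module's own requirements
}

// modFiles stands in for fetching go.mod files from the network.
var modFiles = map[module]modFile{
	{"a", "v0.1.0"}: {"1.15", []module{{"b", "v0.1.0"}, {"c", "v0.1.0"}}},
	{"d", "v0.1.0"}: {"1.14", []module{{"e", "v0.1.0"}}},
	{"e", "v0.1.0"}: {"1.14", []module{{"b", "v0.2.0"}}},
	{"b", "v0.1.0"}: {"1.15", nil},
	{"b", "v0.2.0"}: {"1.15", nil},
	{"c", "v0.1.0"}: {"1.15", nil},
}

// deepeningScan reads the go.mod file of each module explicitly
// required by the main module and merges its requirements into the
// build list. Requirements of a go 1.14-or-earlier module are
// followed transitively, since its go.mod may omit indirect
// dependencies.
func deepeningScan(explicit []module) map[string]string {
	selected := map[string]string{} // path -> highest required version
	add := func(m module) {
		// Lexical comparison stands in for semantic version ordering.
		if m.version > selected[m.path] {
			selected[m.path] = m.version
		}
	}
	var scan func(m module, followAll bool)
	scan = func(m module, followAll bool) {
		add(m)
		mf := modFiles[m]
		followAll = followAll || mf.goVersion < "1.15"
		for _, r := range mf.requires {
			if followAll {
				scan(r, true)
			} else {
				add(r)
			}
		}
	}
	for _, m := range explicit {
		scan(m, false)
	}
	return selected
}

func main() {
	selected := deepeningScan([]module{{"a", "v0.1.0"}, {"d", "v0.1.0"}})
	paths := make([]string, 0, len(selected))
	for p := range selected {
		paths = append(paths, p)
	}
	sort.Strings(paths)
	for _, p := range paths {
		fmt.Println(p, selected[p])
	}
}
```

Note how the requirements of the `go 1.15` module `a` are taken at face value (one level deep), while the `go 1.14` module `d` is followed transitively, raising the selected version of `b` to `v0.2.0`.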
### The `all` pattern and `mod` subcommands
#### In Go 1.11–1.14
In module mode in Go 1.11–1.14, the `all` package pattern matches each package
reachable by following imports _and tests of imported packages_ recursively,
starting from the packages in the main module. (It is equivalent to the set of
packages obtained by iterating `go list -deps -test ./...` over its own output
until it reaches a fixed point.)
`go mod tidy` adjusts the `go.mod` and `go.sum` files so that the main module
transitively requires a set of modules that provide every package matching the
`all` package pattern, independent of build tags. After `go mod tidy`, every
package matching the `all` _package_ pattern is provided by some module matching
the `all` _module_ pattern.
`go mod tidy` also updates a set of `// indirect` comments indicating versions
added or upgraded beyond what is implied by transitive dependencies.
`go mod download` downloads all modules matching the `all` _module_ pattern,
which normally includes a module providing every package in the `all` _package_
pattern.
In contrast, `go mod vendor` copies in only the subset of packages transitively
_imported by_ the packages and tests _in the main module_: it does not scan the
imports of tests outside of the main module, even if those tests are for
imported packages. (That is: `go mod vendor` only covers the packages directly
reported by `go list -deps -test ./...`.)
As a result, when using `-mod=vendor` the `all` and `...` patterns may match
substantially fewer packages than when using `-mod=mod` (the default) or
`-mod=readonly`.
<!-- Note: the behavior of `go mod vendor` was changed to its current form
during the `vgo` prototype, in https://golang.org/cl/122256. -->
#### The `all` package pattern and `go mod tidy`
We would like to preserve the property that, after `go mod tidy`, invocations of
the `go` command — including `go test` — are _reproducible_ (without changing
the `go.mod` file) for every package matching the `all` package pattern. The
_completeness invariant_ is what ensures reproducibility, so `go mod tidy` must
ensure that it holds.
Unfortunately, even if the _import invariant_ holds for all of the dependencies
of the main module, the current definition of the `all` pattern includes
_dependencies of tests of dependencies_, recursively. In order to establish the
_completeness invariant_ for distant test-of-test dependencies, `go mod tidy`
would sometimes need to record a substantial number of dependencies of tests
found outside of the main module in the main module's `go.mod` file.
Fortunately, we can omit those distant dependencies a different way: by changing
the definition of the `all` pattern itself, so that test-of-test dependencies
are no longer included. Feedback from users (in [#29935], [#26955], [#32380],
[#32419], [#33669], and perhaps others) has consistently favored omitting those
dependencies, and narrowing the `all` pattern would also establish a nice _new_
property: after running `go mod vendor`, the `all` package pattern with
`-mod=vendor` would now match the `all` pattern with `-mod=mod`.
Taking those considerations into account, we propose that the `all` package
pattern in module mode should match only the packages transitively _imported by_
packages and tests in the main module: that is, exactly the set of packages
preserved by `go mod vendor`. Since the `all` pattern is based on package
imports (more-or-less independent of module dependencies), this change should be
independent of the `go` version specified in the `go.mod` file.
The behavior of `go mod tidy` should change depending on the `go` version. In a
module that specifies `go 1.15` or later, `go mod tidy` should scan the packages
matching the new definition of `all`, ignoring build tags. In a module that
specifies `go 1.14` or earlier, it should continue to scan the packages matching
the _old_ definition (still ignoring build tags). (Note that both of those sets
are supersets of the new `all` pattern.)
#### The `all` and `...` module patterns and `go mod download`
In Go 1.11–1.14, the `all` module pattern matches each module reachable by
following module requirements recursively, starting with the main module and
visiting every version of every module encountered. The module pattern `...` has
the same behavior.
The `all` module pattern is important primarily because it is the default set of
modules downloaded by the `go mod download` subcommand, which sets up the local
cache for offline use. However, it (along with `...`) is also currently used by
a few other tools (such as `go doc`) to locate “modules of interest” for other
purposes.
Unfortunately, these patterns as defined in Go 1.11–1.14 are _not compatible
with lazy loading:_ they examine transitive `go.mod` files without loading any
packages. Therefore, in order to achieve lazy loading we must change their
behavior.
Since we want to compute the list of modules without loading any packages or
irrelevant `go.mod` files, we propose that when the main module's `go.mod` file
specifies `go 1.15` or higher, the `all` and wildcard module patterns should
match only those modules found in a _deepening scan_ of the main module's
dependencies. That definition includes every module whose version is
reproducible due to the _completeness invariant,_ including modules needed by
tests of transitive imports.
With this redefinition of the `all` module pattern, and the above redefinition
of the `all` package pattern, we again have the property that, after `go mod
tidy && go mod download all`, invoking `go test` on any package within `all`
does not need to download any new dependencies.
Since the `all` pattern includes every module encountered in the deepening scan,
rather than only those that provide imported packages, `go mod download` may
continue to download more source code than is strictly necessary to build the
packages in `all`. However, as is the case today, users may download only that
narrower set as a side effect of invoking `go list all`.
### Effect on `go.mod` size
Under this approach, the set of modules recorded in the `go.mod` file would in
most cases increase beyond the set recorded in Go 1.14. However, the set of
modules recorded in the `go.sum` file would decrease: irrelevant modules would
no longer be included.
- The modules recorded in `go.mod` under this proposal would be a strict
subset of the set of modules recorded in `go.sum` in Go 1.14.
- The set of recorded modules would more closely resemble a “lock” file as
used in other dependency-management systems. (However, the `go` command
would still not require a separate “manifest” file, and unlike a lock
file, the `go.mod` file would still be updated automatically to reflect
new requirements discovered during package loading.)
- For modules with _few_ test-of-test dependencies, the `go.mod` file after
running `go mod tidy` will typically be larger than in Go 1.14. For modules
with _many_ test-of-test dependencies, it may be substantially smaller.
- For modules that are _tidy:_
- The module versions recorded in the `go.mod` file would be exactly those
listed in `vendor/modules.txt`, if present.
- The module versions recorded in `vendor/modules.txt` would be the same
as under Go 1.14, although the `## explicit` annotations could perhaps
be removed (because _all_ relevant dependencies would be recorded
explicitly).
- The module versions recorded in the `go.sum` file would be exactly those
listed in the `go.mod` file.
## Compatibility
The `go.mod` file syntax and semantics proposed here are backward compatible
with previous Go releases: all `go.mod` files for existing `go` versions would
retain their current meaning.
Under this proposal, a `go.mod` file that specifies `go 1.15` or higher will
cause the `go` command to lazily load the `go.mod` files for its requirements.
When reading a `go 1.15` file, previous versions of the `go` command (which do
not prune irrelevant dependencies) may select _higher_ versions than those
selected under this proposal, by following otherwise-irrelevant dependency
edges. However, because the `require` directive continues to specify a minimum
version for the required dependency, a previous version of the `go` command will
never select a _lower_ version of any dependency.
Moreover, any strategy that prunes out a dependency as interpreted by a previous
`go` version will continue to prune out that dependency as interpreted under
this proposal: module maintainers will not be forced to break users on new `go`
versions in order to support users on older versions (or vice-versa).
Versions of the `go` command before 1.14 do not preserve the proposed invariants
for the main module: if a `go` command from before 1.14 is run in a `go 1.15`
module, it may automatically remove requirements that are now needed. However,
as a result of [CL 204878], `go` version 1.14 does preserve those invariants in
all subcommands except for `go mod tidy`: Go 1.14 users will be able to work (in
a limited fashion) within a Go 1.15 main module without disrupting its
invariants.
## Implementation
`bcmills` is working on a prototype of this design for `cmd/go` in Go 1.15.
At this time, we do not believe that any other tooling changes will be needed.
## Open issues
Because `go mod tidy` will now preserve seemingly-redundant requirements, we may
find that we want to expand or update the `// indirect` comments that it
currently manages. For example, we may want to indicate “indirect dependencies
at implied versions” separately from “upgraded or potentially-unused indirect
dependencies”, and we may want to indicate “direct or indirect dependencies of
tests” separately from “direct or indirect dependencies of non-tests”.
Since these comments do not have a semantic effect, we can fine-tune them after
implementation (based on user feedback) without breaking existing modules.
## Examples
The following examples illustrate the proposed behavior using the `cmd/go`
[script test] format. For local testing and exploration, the test files can be
extracted using the [`txtar`] tool.
### Importing a new package from an existing module dependency
```txtar
cp go.mod go.mod.old
go mod tidy
cmp go.mod go.mod.old
# Before adding a new import, the go.mod file should
# enumerate modules for all packages already imported.
go list all
cmp go.mod go.mod.old
# When a new import is found, we should perform a deepening scan of the existing
# dependencies and add a requirement on the version required by those
# dependencies — not re-resolve 'latest'.
cp lazy.go.new lazy.go
go list all
cmp go.mod go.mod.new
-- go.mod --
module example.com/lazy
go 1.15
require (
example.com/a v0.1.0
example.com/b v0.1.0 // indirect
)
replace (
example.com/a v0.1.0 => ./a
example.com/b v0.1.0 => ./b
example.com/c v0.1.0 => ./c1
example.com/c v0.2.0 => ./c2
)
-- lazy.go --
package lazy
import (
_ "example.com/a/x"
)
-- lazy.go.new --
package lazy
import (
_ "example.com/a/x"
_ "example.com/a/y"
)
-- go.mod.new --
module example.com/lazy
go 1.15
require (
example.com/a v0.1.0
example.com/b v0.1.0 // indirect
example.com/c v0.1.0 // indirect
)
replace (
example.com/a v0.1.0 => ./a
example.com/b v0.1.0 => ./b
example.com/c v0.1.0 => ./c1
example.com/c v0.2.0 => ./c2
)
-- a/go.mod --
module example.com/a
go 1.15
require (
example.com/b v0.1.0
example.com/c v0.1.0
)
-- a/x/x.go --
package x
import _ "example.com/b"
-- a/y/y.go --
package y
import _ "example.com/c"
-- b/go.mod --
module example.com/b
go 1.15
-- b/b.go --
package b
-- c1/go.mod --
module example.com/c
go 1.15
-- c1/c.go --
package c
-- c2/go.mod --
module example.com/c
go 1.15
-- c2/c.go --
package c
```
### Testing an imported package found in another module
```txtar
cp go.mod go.mod.old
go mod tidy
cmp go.mod go.mod.old
# 'go list -m all' should include modules that cover the test dependencies of
# the packages imported by the main module, found via a deepening scan.
go list -m all
stdout 'example.com/b v0.1.0'
! stdout example.com/c
cmp go.mod go.mod.old
# 'go test' of any package in 'all' should use its existing dependencies without
# updating the go.mod file.
go list all
stdout example.com/a/x
go test example.com/a/x
cmp go.mod go.mod.old
-- go.mod --
module example.com/lazy
go 1.15
require example.com/a v0.1.0
replace (
example.com/a v0.1.0 => ./a
example.com/b v0.1.0 => ./b1
example.com/b v0.2.0 => ./b2
example.com/c v0.1.0 => ./c
)
-- lazy.go --
package lazy
import (
_ "example.com/a/x"
)
-- a/go.mod --
module example.com/a
go 1.15
require example.com/b v0.1.0
-- a/x/x.go --
package x
-- a/x/x_test.go --
package x
import (
"testing"
_ "example.com/b"
)
func TestUsingB(t *testing.T) {
// …
}
-- b1/go.mod --
module example.com/b
go 1.15
require example.com/c v0.1.0
-- b1/b.go --
package b
-- b1/b_test.go --
package b
import _ "example.com/c"
-- b2/go.mod --
module example.com/b
go 1.15
require example.com/c v0.1.0
-- b2/b.go --
package b
-- b2/b_test.go --
package b
import _ "example.com/c"
-- c/go.mod --
module example.com/c
go 1.15
-- c/c.go --
package c
```
### Testing an unimported package found in an existing module dependency
```txtar
cp go.mod go.mod.old
go mod tidy
cmp go.mod go.mod.old
# 'go list -m all' should include modules that cover the test dependencies of
# the packages imported by the main module, found via a deepening scan.
go list -m all
stdout 'example.com/b v0.1.0'
cmp go.mod go.mod.old
# 'go test all' should use those existing dependencies without updating the
# go.mod file.
go test all
cmp go.mod go.mod.old
-- go.mod --
module example.com/lazy
go 1.15
require (
example.com/a v0.1.0
)
replace (
example.com/a v0.1.0 => ./a
example.com/b v0.1.0 => ./b1
example.com/b v0.2.0 => ./b2
example.com/c v0.1.0 => ./c
)
-- lazy.go --
package lazy
import (
_ "example.com/a/x"
)
-- a/go.mod --
module example.com/a
go 1.15
require (
example.com/b v0.1.0
)
-- a/x/x.go --
package x
-- a/x/x_test.go --
package x
import _ "example.com/b"
func TestUsingB(t *testing.T) {
// …
}
-- b1/go.mod --
module example.com/b
go 1.15
-- b1/b.go --
package b
-- b1/b_test.go --
package b
import _ "example.com/c"
-- b2/go.mod --
module example.com/b
go 1.15
require (
example.com/c v0.1.0
)
-- b2/b.go --
package b
-- b2/b_test.go --
package b
import _ "example.com/c"
-- c/go.mod --
module example.com/c
go 1.15
-- c/c.go --
package c
```
### Testing a package imported from a `go 1.14` dependency
```txtar
cp go.mod go.mod.old
go mod tidy
cmp go.mod go.mod.old
# 'go list -m all' should include modules that cover the test dependencies of
# the packages imported by the main module, found via a deepening scan.
go list -m all
stdout 'example.com/b v0.1.0'
stdout 'example.com/c v0.1.0'
cmp go.mod go.mod.old
# 'go test' of any package in 'all' should use its existing dependencies without
# updating the go.mod file.
#
# In order to satisfy reproducibility for the loaded packages, the deepening
# scan must follow the transitive module dependencies of 'go 1.14' modules.
go list all
stdout example.com/a/x
go test example.com/a/x
cmp go.mod go.mod.old
-- go.mod --
module example.com/lazy
go 1.15
require example.com/a v0.1.0
replace (
example.com/a v0.1.0 => ./a
example.com/b v0.1.0 => ./b
example.com/c v0.1.0 => ./c1
example.com/c v0.2.0 => ./c2
)
-- lazy.go --
package lazy
import (
_ "example.com/a/x"
)
-- a/go.mod --
module example.com/a
go 1.14
require example.com/b v0.1.0
-- a/x/x.go --
package x
-- a/x/x_test.go --
package x
import (
"testing"
_ "example.com/b"
)
func TestUsingB(t *testing.T) {
// …
}
-- b/go.mod --
module example.com/b
go 1.14
require example.com/c v0.1.0
-- b/b.go --
package b
import _ "example.com/c"
-- c1/go.mod --
module example.com/c
go 1.14
-- c1/c.go --
package c
-- c2/go.mod --
module example.com/c
go 1.14
-- c2/c.go --
package c
```
<!-- References -->
[package pattern]: https://tip.golang.org/cmd/go/#hdr-Package_lists_and_patterns
"go — Package lists and patterns"
[script test]: https://go.googlesource.com/go/+/refs/heads/master/src/cmd/go/testdata/script/README
"src/cmd/go/testdata/script/README"
[`txtar`]: https://pkg.go.dev/golang.org/x/exp/cmd/txtar
"golang.org/x/exp/cmd/txtar"
[CL 121304]: https://golang.org/cl/121304
"cmd/go/internal/vgo: track directly-used vs indirectly-used modules"
[CL 122256]: https://golang.org/cl/122256
"cmd/go/internal/modcmd: drop test sources and data from mod -vendor"
[CL 204878]: https://golang.org/cl/204878
"cmd/go: make commands other than 'tidy' prune go.mod less aggressively"
[#26904]: https://golang.org/issue/26904
"cmd/go: allow replacement modules to alias other active modules"
[#27900]: https://golang.org/issue/27900
"cmd/go: 'go mod why' should have an answer for every module in 'go list -m all'"
[#29773]: https://golang.org/issue/29773
"cmd/go: 'go list -m' fails to follow dependencies through older versions of the main module"
[#29935]: https://golang.org/issue/29935
"x/build: reconsider the large number of third-party dependencies"
[#26955]: https://golang.org/issue/26955
"cmd/go: provide straightforward way to see non-test dependencies"
[#30831]: https://golang.org/issue/30831
"cmd/go: 'get -u' stumbles over repos imported via non-canonical paths"
[#31248]: https://golang.org/issue/31248
"cmd/go: mod tidy removes lines that build seems to need"
[#32058]: https://golang.org/issue/32058
"cmd/go: replace directives are not thoroughly documented"
[#32380]: https://golang.org/issue/32380
"cmd/go: don't add dependencies of external tests"
[#32419]: https://golang.org/issue/32419
"proposal: cmd/go: conditional/optional dependency for go mod"
[#33370]: https://golang.org/issue/33370
"cmd/go: treat pseudo-version 'vX.0.0-00010101000000-000000000000' as equivalent to an empty commit"
[#33669]: https://golang.org/issue/33669
"cmd/go: fetching dependencies can be very aggressive when going via an HTTP proxy"
[#34016]: https://golang.org/issue/34016
"cmd/go: 'go list -m all' hangs for git.apache.org"
[#34417]: https://golang.org/issue/34417
"cmd/go: do not allow the main module to replace (to or from) itself"
[#34822]: https://golang.org/issue/34822
"cmd/go: do not update 'go.mod' automatically if the changes are only cosmetic"
[#36369]: https://golang.org/issue/36369
"cmd/go: dependencies in go.mod of older versions of modules in require cycles affect the current version's build"
# Go 1.5 Bootstrap Plan
Russ Cox \
January 2015 \
golang.org/s/go15bootstrap \
([comments on golang-dev](https://groups.google.com/d/msg/golang-dev/3bTIOleL8Ik/D8gICLOiUJEJ))
## Abstract
Go 1.5 will use a toolchain written in Go (at least in part). \
Question: how do you build Go if you need Go built already? \
Answer: building Go 1.5 will require having Go 1.4 available.
[**Update, 2023.** This plan was originally published as a Google document. For easier access, it was converted to Markdown in this repository in 2023. Later versions of Go require newer bootstrap toolchains. See [go.dev/issue/52465](https://go.dev/issue/52465) for those details.]
## Background
We have been planning for a year now to eliminate all C programs from the Go source tree. The C compilers (5c, 6c, 8c, 9c) have already been removed. The remaining C programs will be converted to Go: they are the Go compilers ([golang.org/s/go13compiler](https://go.dev/s/go13compiler)), the assemblers, the linkers ([golang.org/s/go13linker](https://go.dev/s/go13linker)), and cmd/dist. If these programs are written in Go, that introduces a bootstrapping problem when building completely from source code: you need a working Go toolchain in order to build a Go toolchain.
## Proposal
To build Go 1.x, for x ≥ 5, it will be necessary to have Go 1.4 (or newer) installed already, in $GOROOT_BOOTSTRAP. The default value of $GOROOT_BOOTSTRAP is $HOME/go1.4. In general we'll keep using Go 1.4 as the bootstrap base version for as long as possible. The toolchain proper (compiler, assemblers, linkers) will need to be buildable with Go 1.4, whether by restricting their feature use to what is in Go 1.4 or by using build tags.
For comparison with what will follow, the old build process for Go 1.4 is:
1. Build cmd/dist with gcc (or clang).
2. Using dist, build compiler toolchain with gcc (or clang).
3. NOP
4. Using dist, build cmd/go (as go_bootstrap) with compiler toolchain.
5. Using go_bootstrap, build the remaining standard library and commands.
The new build process for Go 1.x (x ≥ 5) will be:
1. Build cmd/dist with Go 1.4.
2. Using dist, build Go 1.x compiler toolchain with Go 1.4.
3. Using dist, rebuild Go 1.x compiler toolchain with itself.
4. Using dist, build Go 1.x cmd/go (as go_bootstrap) with Go 1.x compiler toolchain.
5. Using go_bootstrap, build the remaining Go 1.x standard library and commands.
There are two changes.
The first change is that we replace gcc (or clang) with Go 1.4.
The second change is the introduction of step 3, which rebuilds the Go 1.x compiler toolchain with itself. The 6g built in Step 2 is a Go 1.x compiler built using Go 1.4 libraries and compilers. The 6g built in Step 3 is the same Go 1.x compiler, but built using Go 1.x libraries and compilers. If Go 1.x has changed the format of debug info or some other detail of the binaries, it may matter to tools whether 6g is a Go 1.4 binary or a Go 1.x binary. If Go 1.x has introduced any performance or stability improvements in the libraries, the compiler in Step 3 will be faster or more stable than the compiler in Step 2. Of course, if Go 1.x is buggier, the 6g built in Step 3 will also be buggier, so it will be possible to disable step 3 for debugging.
Step 3 could make make.bash take longer. As an upper bound on the slowdown, the current build process steps 1-4 take 20 seconds on my MacBook Pro, out of the total 40 seconds required for make.bash. In the new process, I can’t see step 3 adding more than 50% to the make.bash run time, and I expect it would be significantly less than that. On the other hand, the C compilations being replaced are very I/O heavy; two Go compilations might still be faster, especially on I/O-constrained ARM devices. In any event, if make.bash does slow down, I will speed up run.bash at least as much, so that all.bash time does not increase.
## New Ports
Bootstrapping makes new ports a little more complex. It was possible in the past to check out the Go tree on a new system and run all.bash to build the toolchain (and it would fail, and you’d make some edits, and try again). Now, it will not be possible to run all.bash until that system is fully supported by Go.
For Go 1.x (x ≥ 5), new ports will have to be done by cross-compiling test binaries on a working system, copying the binaries over to the target, and running and debugging them there. This is already well-supported by all.bash via the go\_$GOOS\_$GOARCH\_exec scripts (see ‘go help run’). Once all.bash can be run in that mode, the resulting compilers and libraries can be copied to the target system and used directly.
Once a port works well enough that the compilers and linkers can run on the target machine, the script bootstrap.bash (run on an old system) will prepare a GOROOT_BOOTSTRAP directory for use on the new system.
## Deployment
Today we are still using the Go 1.4 build process above.
The first step in the transition will be to convert cmd/dist itself to Go and change make.bash to use Go 1.4 to build cmd/dist. That replaces “gcc (or clang)” with “Go 1.4” in step 1 of the build and changes nothing else. This will mainly exercise the integration of Go 1.4 into the build.
After that first step, we can convert the remaining C programs in whatever order makes sense. Each conversion will require minor modifications to cmd/dist to build the Go version instead of the C version. I am not sure whether the new linker or the new assemblers will be converted first. I expect the Go compiler to be converted last.
We will probably do the larger conversions on the dev.cc branch and merge into master at good checkpoints, so that multiple people can work on the conversion (coordinated via Git) but be able to break certain builds for long periods without affecting other developers. This is similar to what we did for dev.cc and dev.garbage in 2014.
Go 1.5 will require Go 1.4 to build. The goal is to convert all the C programs—the Go compiler, the linker, the assemblers, and cmd/dist—for Go 1.5. We may not reach that goal, but certainly some of that list will be converted.
# Proposal: Design and Implementation of Profile-Guided Optimization (PGO) for Go
Author(s): Raj Barik, Jin Lin
Last updated: 2022-09-12
Discussion at https://golang.org/issue/55025. \
Linked high-level issue for PGO at https://golang.org/issue/55022.
## Abstract
Inefficiencies in Go programs can be isolated via profiling tools such as [pprof](https://pkg.go.dev/runtime/pprof) and the Linux profiler [perf](https://www.brendangregg.com/perf.html).
Such tools can pinpoint source code regions where most of the execution time is spent.
Unlike other optimizing compilers such as [LLVM](https://llvm.org/docs/HowToBuildWithPGO.html), the Go compiler does not yet perform Profile-Guided Optimization (PGO).
PGO uses information about the code’s runtime behavior to guide compiler optimizations such as inlining, code layout etc.
PGO can improve application performance in the range 15-30% [[LLVM](https://llvm.org/docs/HowToBuildWithPGO.html), [AutoFDO](https://dl.acm.org/doi/pdf/10.1145/2854038.2854044)].
In this proposal, we extend the Go compiler with PGO.
Specifically, we incorporate the profiles into the frontend of the compiler to build a call graph with node & edge weights (called _WeightedCallGraph_).
The Inliner subsequently uses the WeightedCallGraph to perform _profile-guided inlining_ which aggressively inlines hot functions.
We introduce a _profile-guided code specialization_ pass that is tightly integrated with the Inliner and eliminates indirect method call overheads in hot code paths.
Furthermore, we annotate IR instructions with their associated profile weights and propagate these to the SSA-level in order to facilitate _profile-guided basic-block layout_ optimization to benefit from better instruction-cache and TLB performance.
Finally, we extend Go's linker to also consume the profiles directly and perform _function reordering_ optimization across package boundaries -- which also helps instruction-cache and TLB performance.
The format of the profile file consumed by our PGO is identical to the protobuf format produced by the [pprof](https://pkg.go.dev/runtime/pprof) tool.
This format is rich enough to carry additional hardware performance counter information such as cache misses, LBR, etc.
Existing [perf\_data\_converter](https://github.com/google/perf_data_converter) tool from Google can convert a _perf.data_ file produced by the Linux [perf](https://www.brendangregg.com/perf.html) into a _profile.proto_ file in protobuf format.
The first version of the code that performs _profile-guided inlining_ is available [here](http://github.com/rajbarik/go).
## Background
Many compiler optimizations, such as inlining, register allocation, & instruction scheduling often use hard-coded information related to caller-callee frequencies, basic-block frequencies, and branch probabilities to guide optimization.
The static estimation of these metrics often leads to poor quality code generated by the compiler.
These optimizations can easily benefit from dynamic information collected via profiling an application.
Traditionally, a PGO-based compilation begins with an instrumentation phase to generate an instrumented version of the application.
Next, the instrumented program runs with training data to collect the profile (e.g., [edge-profiles](https://dl.acm.org/doi/10.1145/183432.183527)).
These profiles are subsequently fed to the compiler and the application is recompiled to produce an optimized binary.
During this process, the compiler updates and propagates profile information including feeding them to optimization passes to optimize hot/code paths.
Modern compilers such as LLVM have embedded PGO in them and have reported [speed ups](https://llvm.org/docs/HowToBuildWithPGO.html) of ~20\%.
Since the instrumentation build of an application incurs significant overhead, [recent work](https://dl.acm.org/doi/10.1145/1772954.1772963) has demonstrated little or no performance loss by collecting execution profiles via sampling (e.g., [hardware performance counter](https://dl.acm.org/doi/10.1145/1772954.1772963), [pprof](https://pkg.go.dev/runtime/pprof)).
Go binaries are often large as they are statically linked and include all dependent packages and runtimes.
For such large binaries, misses in instruction cache and TLB can cause stalled cycles in the front-end leading to performance degradation.
Profile-guided _code layout_ optimization is known to alleviate this problem.
Recent works including [FB-BOLT](https://research.fb.com/publications/bolt-a-practical-binary-optimizer-for-data-centers-and-beyond/) and [Google-Propeller](https://github.com/google/llvm-propeller) have shown more than 10% performance improvements by optimizing code locality in large data-center workloads.
_Code layout optimization_ improves code locality and it comprises basic-block layout, function splitting, and function reordering optimizations.
In order to extract maximum performance, these optimizations are typically performed during or post link-time using profiling information.
### Standard Compilation flow in Go
![](55022/image1.png)
**Figure 1.** Go compiler.
At a very high-level, Go compiler produces an object file per package.
It links all object-files together to produce an executable.
It avoids unnecessary recompilation of packages which are not modified.
For each package, the frontend of the Go compiler performs passes such as type checking and ast-lowering before generating the frontend-IR.
At this IR-level, the Go compiler performs optimizations such as inlining, devirtualization, and escape analysis before lowering to SSA-IR.
Several optimizations (dead-code, cse, dead-store, basic-block layout, etc.) are performed at the SSA-level before producing an object file for a package.
Inlining optimization in Go compiler proceeds as a bottom-up traversal of the call graph.
For each function, it first invokes _CanInline_ to determine the eligibility for inlining, e.g., functions marked with _go:noinline_ can not be inlined.
CanInline traverses the IR (via a HairyVisitor) to calculate the _Cost_ of the function.
If this Cost is greater than _maxInlineBudget_ (80), CanInline marks this function not inlinable in its upstream callers.
Subsequently, Inliner performs inlining in the same function for call-sites that have direct callees with cost less than maxInlineBudget (80).
As an additional code size optimization, the Inliner lowers the budget from maxInlineBudget (80) to inlineBigFunctionMaxCost (20) for a big function whose number of IR instructions is more than 5K. These parameters have been tuned heavily to get a good balance between code size and performance.
On the other hand, there have been several discussions around maxInlineBudget (80) being too small, which prevents frequently executed methods from being inlined, e.g., NextSet in [bitset](https://lemire.me/blog/2017/09/05/go-does-not-inline-functions-when-it-should/).
It is definitely compelling to be able to tune these parameters based on runtime behavior of an application.
Devirtualization pass in the Go compiler replaces interface method calls with direct concrete-type method calls where possible.
Since Inlining happens before Devirtualization, there is no guarantee that the concrete method calls generated by the devirtualizer gets inlined in the compiler.
Go compiler performs basic-block layout at the SSA-level today, but it is not profile-guided.
This optimization is more effective when runtime information is provided to the compiler.
Go linker does not yet perform any function reordering optimization.
To the best of our knowledge, the linker today creates two sections, one for data and another for text/code.
Ideally, one would like to do basic-block layout, hot-cold splitting, & function reordering optimizations in the linker ([FB-BOLT](https://research.fb.com/publications/bolt-a-practical-binary-optimizer-for-data-centers-and-beyond/)); however, without support for sections at function and basic-block level it is hard to do them in the linker.
This limitation has led us to enable the basic-block layout optimization at SSA-level and function reordering optimization in the linker.
## Proposal
### New compilation flow proposed in Go for PGO
![](55022/image2.png)
**Figure 2.** New PGO-enabled Go compiler.
Our proposed Go compilation flow consists of the following high-level components.
* __Pprof-graph__: First, we process the profile proto file that is generated either via the [pprof](https://github.com/google/pprof/tree/main/profile) tool or the perf+[perf\_data\_converter](https://github.com/google/perf_data_converter) tool to produce an inter-package _pprof-graph._ We use the existing _internal/profile_ package to get this graph straight out-of-the-box.
This graph contains most of the information needed to perform PGO, e.g., function names, flat weights, cumulative weights, call-sites with callee information for both direct and indirect callees etc.
One thing to note is that ideally we would like to generate this graph once and cache it for individual package-level compilation in order to keep compile-time under control, however since packages could be compiled in parallel in multiple processes we could not find an easy way to share this graph across packages.
We fall-back to building this graph for every package.
* __CallGraphBuilder and WeightedCallGraph__: While compiling each package, we visit the IRs of all the functions in the package to produce an intra-package call graph with edge and node weights.
The edge and node weights are obtained from the _pprof-graph_.
We term this as _WeightedCallGraph_ (shown as _CallGraphBuilder_ in Figure 2).
In other words, _WeightedCallGraph_ is a succinct version of package-level pprof-graph with additional information such as associated IR function bodies, call edges linked with IR call sites, multiple callee edges for indirect calls etc.
* __Embed profiles in the frontend IR & SSA IR__: While building the WeightedCallgraph, we also create a map table that stores the IR and its corresponding profiled weights in order to facilitate the basic-block layout optimization at the SSA-level.
The primary reason for storing every IR instruction with weights is that there is no control-flow graph available at the IR-level, which could have been leveraged to just annotate branch instructions and branch-edges.
During the lowering from the frontend IR to SSA IR, the profiling information is annotated into the basic blocks and edges.
The compiler propagates the profiling information in the control flow graph.
Similarly, every optimization based on the SSA IR maintains the profiling information.
Annotating every single instruction with weights can increase memory pressure; in future we plan on exploring other options.
* __Profile-guided Inlining__: During inlining, we determine hot functions and hot call-sites based on the _WeightedCallgraph_.
We increase the inlining budget from 80 to a larger value (default 160, tunable via command-line) based on the hotness of the function.
This enables a function to be inlined in its upstream callers even if its budget is more than 80.
Moreover if the function has a cost more than 80 and has more than one upstream callers, then only hot-call edges will be inlined, others will not, thereby controlling code size growth.
Similarly, hot call-sites are also inlined aggressively in "mkinlcall" function using the increased budget.
The hot-ness criteria is based on a threshold, which can be tuned via command-line (default set to 2% of the total execution time).
One subtle note is that the total number of inlined functions via PGO may either be greater or smaller than the original inliner without PGO.
For example, in a pathological case if a hot function gets bulkier (i.e., its budget is more than 160) due to inlining performed in transitive callees, then it may not get inlined in upstream callers (unlike standard inlining) and, if there are a large number of upstream callers, then it may lead to fewer inlining outcomes.
This can be addressed by increasing the budget via command-line, but that may also lead to increased code size, something to keep an eye out for.
* __Code Specialization__: In order to eliminate indirect function call overheads, we introduce a _code specialization_ step inside the Inliner to transform an indirect callsite into an "if-else" code block based on the hot receiver.
The WeightedCallgraph is used to determine a hot receiver.
The "if" condition introduces a check on the runtime type of the hot receiver.
The body of the "if" block converts the indirect function call into a direct function call.
This call subsequently gets inlined since Inlining follows this optimization.
The "else" block retains the original slow indirect call.
As mentioned earlier, our goal is to be able to inline hot receivers.
We purposefully avoid making multiple checks on the receiver type for multiple hot receiver cases in order to avoid unnecessary code growth and performance degradation in corner cases.
In certain cases, it is possible to hoist the "if" condition check outside of the surrounding loop nests; this is a subject for future work.
During Inlining and Specialization, we update and propagate weights of the IR instructions.
* __Basic-block Reordering__: Since the Go compiler produces a large monolithic binary, we implement state-of-the-art profile-guided basic block layout optimization, i.e., [EXT-TSP heuristic](https://ieeexplore.ieee.org/document/9050435), at the SSA-level.
The algorithm is a greedy heuristic that works with chains (ordered lists) of basic blocks.
Initially all chains are isolated basic blocks.
On every iteration, a pair of chains are merged that yields the biggest benefit in terms of instruction-cache reuse.
The algorithm stops when one chain is left or merging does not improve the overall benefit.
If more than one chain is left, these chains are sorted using a density function that prioritizes merging frequently executed smaller-sized chains into each page.
Unlike earlier algorithms (e.g., [Pettis-Hansen](https://dl.acm.org/doi/10.1145/93548.93550)), in ExtTSP algorithm two chains, X and Y, are first split into three hypothetical chains, X1, X2, and Y.
Then it considers all possible ways of combining these three chains (e.g., X1YX2, X1X2Y, X2X1Y, X2YX1, YX1X2, YX2X1) and chooses the one producing the largest benefit.
We also have an implementation of [Pettis-Hansen](https://dl.acm.org/doi/10.1145/93548.93550) available for performance comparisons.
* __Function Reorderings__: Finally, we introduce a "function reordering" optimization in the linker.
Linker code is extended to rebuild the cross-package "pprof-graph" via a command-line argument.
It first produces a summary table consisting of a set of (caller, callee, edge-weight) entries [similar to WeightedCallGraph without node weights].
We then use the [C3 heuristic](https://dl.acm.org/doi/10.5555/3049832.3049858) to re-layout the functions in the text section.
The structure of the C3 algorithm is similar to the ExtTSP and we avoid describing it again.
One thing to note is that the C3 heuristic places a function as close as possible to its most common caller following a priority from the hottest to the coldest functions in the program.
One issue we ran into while implementing this algorithm is that C3 heuristic requires the relative distance of a callsite from the start of a function.
We found this tricky to implement in the linker, so our current implementation approximates this distance as half of the code size of the function instead of computing it precisely.
We are able to compute code size directly from the object file in order to compute density.
In future, we should explore options of computing this information in the compiler and store it in a fixed location in the object file.
We also have an implementation of [Pettis-Hansen](https://dl.acm.org/doi/10.1145/93548.93550) available as an alternative.
Our function reordering optimization is automatically cross-package.
One subtle issue we have run into is that often GC-related functions are intermingled with application code in profiles making function reordering optimization sub-optimal.
While GC functions emanating from the "assistGC" function called from the "malloc" routine should be included in the function reordering optimization, other GC functions probably should be sequenced as a separate chain/cluster of its own.
This makes sense given that GC gets invoked at arbitrary timelines and runs in a separate thread of its own (perhaps could run in a separate core as well).
This is a subject for future work for us.
* __Stale Profile__: As source code evolves, it is highly likely that the profiles could become stale over time.
In order to deal with this, we propose to use _\<pkg\_name, function\_name, line\_offset\>_ instead of _\<pkg\_name, function\_name, line\_number\>_.
With line\_offset information, we can identify if a call site has moved up or down relative to the start of the function.
In this case, we do not inline this call site, however other call sites that have not moved up or down will continue to use the profile information and get optimized.
This design also allows functions that are unchanged to continue to be optimized via PGO.
This design is similar to [AutoFDO](https://dl.acm.org/doi/pdf/10.1145/2854038.2854044).
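The specialization transform described above can be illustrated at the source level. Note that the types and names below are invented purely for this example; the real pass rewrites compiler IR rather than source, and the hot receiver is determined from the profile.

```go
package main

import "fmt"

type Shape interface{ Area() float64 }

type Circle struct{ R float64 }

func (c *Circle) Area() float64 { return 3.14159 * c.R * c.R }

type Square struct{ S float64 }

func (s *Square) Area() float64 { return s.S * s.S }

// totalArea is the code before specialization: every s.Area() is an
// indirect call dispatched through the interface's method table.
func totalArea(shapes []Shape) float64 {
	var sum float64
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

// totalAreaSpecialized shows the "if-else" shape the pass generates when
// the profile identifies *Circle as the hot receiver: the type check
// guards a direct (and therefore inlinable) call, while the "else" branch
// retains the original slow indirect call.
func totalAreaSpecialized(shapes []Shape) float64 {
	var sum float64
	for _, s := range shapes {
		if c, ok := s.(*Circle); ok {
			sum += c.Area() // direct call; eligible for inlining
		} else {
			sum += s.Area() // slow path: original indirect call
		}
	}
	return sum
}

func main() {
	shapes := []Shape{&Circle{R: 1}, &Square{S: 2}}
	fmt.Println(totalArea(shapes) == totalAreaSpecialized(shapes)) // prints "true"
}
```

Because the transformed code falls back to the indirect call for every non-hot receiver, the rewrite is semantics-preserving regardless of how accurate the profile is.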
## Rationale
There are several design choices we have made in this proposal.
__Why not inline every hot callee? Why introduce a budget for hot callee?__
Primary motivation is to control code growth while retaining good performance.
Auto-tuning frameworks such as [OpenTuner](https://github.com/minshuzhan/opentuner) can tune binaries using the command-line options for budget and threshold.
__What is the benefit of WeightedCallGraph data structure?__
The core of WeightedCallGraph is to associate IR functions and callsites with node and edge weights from pprof-graph.
This could have been achieved via a simpler data structure perhaps and thus, WeightedCallGraph could be a bit of a heavyweight data structure just for Inlining and Specialization.
However, it is a forward looking design where other (interprocedural) optimizations can leverage this graph in future.
__Why Specialization is performed during inlining and not before or after?__
Ideally, we want code specialization to be performed before inlining so that the direct callees can be inlined.
On the other hand, when specialization is performed as a separate pass before inliner, one needs to update the WeightedCallGraph during specialization.
When we perform specialization along with Inliner, we avoid updating WeightedCallgraph twice.
Moreover, Specialization also requires parts of CanInline logic to perform legality checks.
Keeping them together also avoids this code duplication.
__Why use the Protobuf format? Why not use other formats similar to LLVM?__
From our experience, protobuf format is rich enough to carry additional profiling information, particularly hardware performance counter information.
There exist tools such as [perf\_data\_converter](https://github.com/google/perf_data_converter) tool that can convert a _perf.data_ file produced by Linux [perf](https://www.brendangregg.com/perf.html) into a _profile.proto_ file in protobuf format.
So, we may not lose on functionality.
__Why do we not perform basic-block layout in the linker?__
Ideally, we should.
Currently, the Go compiler produces one section for code/text.
If it can be extended to produce sections at the basic-block level, we can port our basic-block layout implementation to the linker.
## Compatibility
Currently our optimizations have been tested in the Linux OS only.
To the best of our knowledge, our implementation follows the guidelines described in the [compatibility guidelines](https://golang.org/doc/go1compat).
## Implementation
We have implemented all the optimizations described above in this proposal, i.e., Inlining, Code Specialization, Basic-block layout, & Function reordering.
However, pushing all these changes upstream at once can take a while.
Our strategy is to release only "CallGraphBuilder" and "Profile-guided Inlining" (see Figure 2) in the first phase with the timeline for v1.20 release.
Below we provide the implementation details for CallGraphBuilder and Profile-guided inlining.
### CallGraphBuilder implementation
* __BuildPProfGraph__: This function builds the pprof-graph based on the "internal/profile" package.
The pprof-graph is built using the _cpuprofile_ data from the profile.
The protobuf profile file is fed to the compiler via a command line flag (_-profileuse_).
It preprocesses the pprof-graph to compute additional information such as _TotalNodeWeights_ and _TotalEdgeWeights_.
It also creates a global node-to-weight map which is cheaper to look up during IR traversal in later phases.
* __BuildWeightedCallGraph__: This function builds the WeightedCallGraph data structure by visiting all the _ir.Func_ nodes in the decl-list of a package.
During the traversal of the IR, it is necessary to determine the callee function for a direct callsite.
Our logic to do this is very similar to "inlCallee" in "inline/inl.go".
Since we would create a cyclic dependency between inline and pgo packages if we reused the code, we have duplicated "inlCallee" logic in irgraph.go (with the same name).
We should refactor this code in future to be able to put it outside of these two packages.
* __RedirectEdges__: This function updates the WeightedCallGraph nodes and edges during and after Inlining based on a list of inlined functions.
### Profile-guided Inlining implementation
* __Prologue__: Before performing inlining, we walk over the _WeightedCallGraph_ to identify hot functions and hot call-sites based on a command-line tunable threshold (_-inlinehotthreshold_).
This threshold is expressed as a percentage of time exclusively spent in a function or call-site over total execution time.
It is by default set to 2% and can be updated via the command-line argument.
When a function or call-site exclusively spends more time than this threshold percentage, we classify the function or call-site as a hot function or hot call-site.
* __Extensions to CanInline__: Currently the HairyVisitor bails out as soon as its budget is more than maxInlineBudget (80).
We introduce a new budget for profile-guided inlining, _inlineHotMaxBudget_, with a default value of 160, which can also be updated via a command line flag (_-inlinehotbudget_).
We use the inlineHotMaxBudget for hot functions during the CanInline step.
```go
if v.budget < 0 {
	if pgo.WeightedCG != nil {
		if n, ok := pgo.WeightedCG.IRNodes[ir.PkgFuncName(fn)]; ok {
			// Hot functions are allowed the larger inlineHotMaxBudget.
			if inlineMaxBudget-v.budget < inlineHotMaxBudget && n.HotNode {
				return false
			}
		}
	}
	...
}
```
During the HairyVisitor traversal, we also track all the hot call-sites of a function.
```go
// Determine if the callee edge is hot.
if pgo.WeightedCG != nil && ir.CurFunc != nil {
	if fn := inlCallee(n.X); fn != nil && typecheck.HaveInlineBody(fn) {
		lineno := fmt.Sprintf("%v", ir.Line(n))
		splits := strings.Split(lineno, ":")
		l, _ := strconv.ParseInt(splits[len(splits)-2], 0, 64)
		linenum := fmt.Sprintf("%v", l)
		canonicalName := ir.PkgFuncName(ir.CurFunc) + "-" + linenum + "-" + ir.PkgFuncName(fn)
		if _, o := candHotEdgeMap[canonicalName]; o {
			listOfHotCallSites[pgo.CallSiteInfo{ir.Line(n), ir.CurFunc}] = struct{}{}
		}
	}
}
```
* __Extension to mkinlcall__: When a hot callee's budget is more than maxCost, we check if this cost is less than inlineHotMaxBudget in order to permit inlining of hot callees.
```go
if fn.Inl.Cost > maxCost {
	...
	// If the callsite is hot and the callee's cost is below the
	// inlineHotMaxBudget, then inline it; otherwise bail.
	if _, ok := listOfHotCallSites[pgo.CallSiteInfo{ir.Line(n), ir.CurFunc}]; ok {
		if fn.Inl.Cost > inlineHotMaxBudget {
			return n
		}
	} else {
		return n
	}
}
```
* __Epilogue__: During this phase, we update the WeightedCallGraph based on the inlining decisions made earlier.
```go
ir.VisitFuncsBottomUp(typecheck.Target.Decls, func(list []*ir.Func, recursive bool) {
	for _, f := range list {
		name := ir.PkgFuncName(f)
		if n, ok := pgo.WeightedCG.IRNodes[name]; ok {
			pgo.RedirectEdges(n, inlinedCallSites)
		}
	}
})
```
* __Command line flags list__: We introduce the following flag in base/flag.go.
```go
PGOProfile string "help:\"read profile from `file`\""
```
We also introduced following debug flags in base/debug.go.
```go
InlineHotThreshold string "help:\"threshold percentage for determining hot methods and callsites for inlining\""
InlineHotBudget int "help:\"inline budget for hot methods\""
```
## Open issues (if applicable)
* Build the pprof-graph once across all packages instead of building it over and over again for each package.
Currently the Go compiler builds packages in parallel using multiple processes making it hard to do this once.
* Enable separate sections for basic-blocks and functions in the linker.
This would enable basic-block reordering optimization at the linker level.
Our implementation performs this optimization at the SSA-level.
# Proposal: Natural XML
Author(s): Sam Whited <sam@samwhited.com>
Last updated: 2016-09-27
Discussion at https://golang.org/issue/13504.
## Abstract
The `encoding/xml` API is arguably difficult to work with.
In order to fix these issues, a more natural API is needed that acts on nodes in
a tree like structure instead of directly on the token stream.
## Background
XML parsers generally operate in one of two modes of operation, a "DOM style"
mode in which entire documents are parsed into a tree-like data structure, the
"Document Object Model" (DOM), and an event-driven "SAX style" mode (Simple API
for XML) in which tokens are streamed one at a time and only handled if they
would trigger a callback or event.
The benefit of a DOM style mode is that all information contained in the XML is
rapidly accessible and can be accessed at will, whereas in a SAX style mode
only information at the current parse location is readily available and other
arrangements have to be made to store previously visible information.
However, the SAX style mode generally provides a relatively small and stable
memory footprint, while the DOM style mode requires parsers to load an entire
document into memory.
Go currently supports a hybrid approach to this situation: entire documents or
elements may be read into native data structures, or individual tokens may be
read off the wire and handled directly by the application.
This works well for simple elements where the entire structure is known, but for
XML with an arbitrary format it forces use of the low-level token stream APIs
directly, which is error-prone and cumbersome.
## Proposal
Having a higher level tree-like API will allow users to manipulate arbitrary XML
in a more natural way that is compatible with Go's hybrid SAX and DOM style
approach to parsing XML.
### Implementation
An interface originally [suggested][167632824] by RSC is proposed:
[167632824]: https://github.com/golang/go/issues/13504#issuecomment-167632824
```go
// An Element represents the complete parse of a single XML element.
type Element struct {
	StartElement
	Child []Child
}
// A Child is an interface holding one of the element child types:
// *Element, CharData, or Comment.
type Child interface{}
```
The `*Element` type will implement `xml.Marshaler` and `xml.Unmarshaler` to make
it compatible with the existing `(*xml.Encoder) Encode` and `(*xml.Decoder)
Decode` methods for situations where entire XML elements should be consumed.
This makes it compatible with both styles of XML parsing in Go.
For example, an entire element could be unmarshaled simply:
```go
el := xml.Element{}
err := d.Decode(&el)
```
Or specific children could be unmarshaled:
```go
tok, err := d.Token()
el := xml.Element{StartElement: tok.(xml.StartElement)}
// Only unmarshal the children named "body"
for ; err == nil; tok, err = d.Token() {
	if start, ok := tok.(xml.StartElement); ok && start.Name.Local == "body" {
		child := &xml.Element{StartElement: start}
		_ = d.DecodeElement(child, &start)
		el.Child = append(el.Child, child)
	}
}
```
The author volunteers to complete this work in the next release cycle with
enough time left after this proposal is accepted and conservatively estimates
that a week of work would be required to complete the changes, including tests.
The changes themselves are relatively easy and this lengthy estimate is mostly
because the author's time is limited to evenings and weekends.
If someone whose job permitted them to work on Go were to accept the task, the
work could almost certainly be completed much more quickly.
## Rationale
For large XML documents or streams that cannot be parsed all at once, the given
approach does not make parsing any less complicated, since we still have to
iterate over the token stream.
It may be possible to fix this by adding new methods to the `*xml.Encode` and
`*xml.Decode` types specifically for dealing with elements, but the author
deems that the benefit is not worth the added complexity to the XML package.
The current solution is simple and does not preclude adding a more robust
Element based API at a later date.
## Compatibility
This proposal does not introduce any changes that would break compatibility
with existing code.
It adds two types which would need to be covered under the compatibility
promise in the future.
## Open issues (if applicable)
* For elements with large numbers of children, accessing a specific child via
a slice may be slow.
Using a map would be a simple fix, but this makes accessing arrays with few
elements slower (the crossover is somewhere around 10 elements in a very
informal benchmark).
Using a trie or some other appropriate tree-like structure can give us the
best of both worlds, but adds a great deal of complexity that is almost
certainly not worth it.
It may, however, be worth not making the children slice public (and using
accessor methods instead) so that the implementation could easily be switched
out at a later date.
# Proposal: Adding tool dependencies to go.mod
Author(s): Conrad Irwin
Last updated: 2024-07-18
Discussion at https://golang.org/issue/48429.
## Abstract
Authors of Go modules frequently use tools that are written in Go and distributed as Go modules.
Although Go has good support for managing dependencies imported into their programs,
the support for tools used during development is comparatively weak.
To make it easier for Go developers to use tools written in Go
`go.mod` should gain a new directive that lets module authors define which tools are needed.
## Background
Programs written in Go are often developed using tooling written in Go.
There are many examples of such tools; for instance:
[golang.org/x/tools/cmd/stringer](https://pkg.go.dev/golang.org/x/tools/cmd/stringer) or
[github.com/kyleconroy/sqlc](https://github.com/kyleconroy/sqlc).
It is desirable that all collaborators on a given project use the same version of
tools to avoid the output changing slightly on different people’s machines.
This comes up particularly with tools like linters
(where changes over time may change whether or not the code is considered acceptable)
and code generation (where the generated code must be assumed to match the
version of the library that is linked).
The currently recommended approach to this is to create a file called `tools.go`
that imports the package containing the tools to make the dependencies visible
to the module graph.
To hide this file from the compiler, it is necessary to exclude it from builds
by adding an unused build tag such as `//go:build tools`.
To hide this file from other packages that depend on your module, it must be put
in its own package inside your module.
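Concretely, a conventional `tools.go` file looks like the following (the package name and imported tool are illustrative):

```
//go:build tools

// Package tools pins tool dependencies in go.mod without affecting
// ordinary builds: the "tools" build tag is never set, so this file
// is always excluded from compilation.
package tools

import (
	_ "golang.org/x/tools/cmd/stringer"
)
```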
This approach is quite fiddly to use correctly, and still has a few downsides:
1. It is hard to type `go run golang.org/x/tools/cmd/stringer`, and so projects
often contain wrapper scripts.
2. `go run` relinks tools every time they are run, which may be noticeably slow.
People work around this by either globally installing tools, which may lead to version skew,
or by installing and using third party tooling (like [accio](https://github.com/mcandre/accio))
to manage their tools instead.
## Proposal
### New syntax in go.mod
`go.mod` gains a new directive: `tool path/to/package`.
This acts exactly as though you had a correctly set up `tools.go` that contains `import "path/to/package"`.
As with other directives, multiple `tool` directives can be factored into a block:
```
go 1.24
tool (
golang.org/x/tools/cmd/stringer
./cmd/migrate
)
```
Is equivalent to:
```
go 1.24
tool golang.org/x/tools/cmd/stringer
tool ./cmd/migrate
```
To allow automated changes, `go mod edit` will gain two new parameters:
`-tool path/to/package` and `-droptool path/to/package`, which add and
remove `tool` directives respectively.
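To illustrate the directive syntax, here is a deliberately simplified scan for `tool` directives in `go.mod` text; a real implementation would use the `golang.org/x/mod/modfile` parser, which also handles comments, quoting, and validation:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// toolDirectives extracts tool paths from go.mod text, handling both
// single-line directives and factored blocks. This is a simplified
// sketch for illustration only.
func toolDirectives(gomod string) []string {
	var tools []string
	inBlock := false
	sc := bufio.NewScanner(strings.NewReader(gomod))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case inBlock && line == ")":
			inBlock = false
		case inBlock:
			tools = append(tools, line)
		case line == "tool (":
			inBlock = true
		case strings.HasPrefix(line, "tool "):
			tools = append(tools, strings.TrimPrefix(line, "tool "))
		}
	}
	return tools
}

func main() {
	gomod := `module example.com/m

go 1.24

tool golang.org/x/tools/cmd/stringer

tool (
	./cmd/migrate
)
`
	fmt.Println(toolDirectives(gomod))
	// [golang.org/x/tools/cmd/stringer ./cmd/migrate]
}
```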
### New behavior for `go get`
To allow users to easily add new tools, `go get` will gain a new parameter: `-tool`.
When `go get` is run with the `-tool` parameter, then it will download the specified
package and add it to the module graph as it does today.
Additionally it will add a new `tool` directive to the current module’s `go.mod`.
If you combine the `-tool` flag with the `@none` version,
then it will also remove the `tool` directive from your `go.mod`.
### New behavior for `go tool`
When `go tool` is run in module mode with an argument that does not match a Go builtin tool,
it will search the current `go.mod` for a tool directive that matches the last
path segment and compile and run that tool similarly to `go run`.
For example if your go.mod contains:
```
tool golang.org/x/tools/cmd/stringer
require golang.org/x/tools v0.9.0
```
Then `go tool stringer` will act similarly to `go run golang.org/x/tools/cmd/stringer@v0.9.0`,
and `go tool` with no arguments will also list `stringer` as a known tool.
In the case that two tool directives end in the same path segment, `go tool X` will error.
In the case that a tool directive ends in a path segment that corresponds to a builtin Go tool,
the builtin tool will be run.
In both cases you can use `go tool path/to/package` to specify what you want unconditionally.
The only difference from `go run` is that `go tool` will cache the built binary
in `$GOCACHE/tool/<current-module-path>/<TOOLNAME>`.
Subsequent runs of `go tool X` will then check that the built binary is up to date,
and only rebuild it if necessary to speed up re-using tools.
When the Go cache is trimmed, any tools that haven't been used in the last five days will be deleted.
Five days was chosen arbitrarily as it matches the expiry used for existing artifacts.
Running `go clean -cache` will also remove all of these binaries.
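A minimal sketch of the caching scheme described above; the path layout follows the proposal, but the staleness check here is a naive timestamp comparison, whereas the real `go` command compares build IDs:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// toolCachePath sketches where a built tool binary would live under
// the scheme described above. Names are illustrative.
func toolCachePath(gocache, modulePath, toolName string) string {
	return filepath.Join(gocache, "tool", filepath.FromSlash(modulePath), toolName)
}

// isStale is a deliberately naive freshness check: rebuild if the
// cached binary is missing or older than the given source file.
func isStale(binary, source string) bool {
	b, err := os.Stat(binary)
	if err != nil {
		return true // no cached binary yet
	}
	s, err := os.Stat(source)
	if err != nil {
		return true
	}
	return b.ModTime().Before(s.ModTime())
}

func main() {
	p := toolCachePath("/home/gopher/.cache/go-build", "example.com/m", "stringer")
	fmt.Println(p)
	fmt.Println(isStale(p, "main.go")) // true: nothing cached yet
}
```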
### A tools metapackage
We will add a new metapackage `tools` that contains all of the tools in the current module's `go.mod`.
This would allow for the following operations:
```
# Install all tools in GOBIN
go install tools
# Build and cache tools so `go tool X` is fast:
go build tools
# Update all tools to their latest versions.
go get tools
# Install all tools in the bin/ directory
go build -o bin/ tools
```
## Rationale
This proposal tries to improve the workflow of Go developers who use tools
packaged as Go modules while developing Go modules.
It deliberately does not try to solve the problem of versioning arbitrary binaries:
anything not distributed as a Go module is out of scope.
There were a few choices that needed to be made, explained below:
1. We need a mechanism to specify an exact version of a tool to use in a given module.
Re-using the `require` directives in `go.mod` allows us to do this without introducing
a separate dependency tree or resolution path.
This also means that you can use `require` and `replace` directives to control the
dependencies used when building your tools.
2. We need a way to easily run a tool at the correct version.
Adding `go tool X` allows Go to handle versioning for you, unlike installing binaries to your path.
3. We need a way to improve the speed of running tools (compared to `go run` today)
as tools are likely to be reused.
Reusing the existing Go cache and expiry allows us to do this in a best-effort
way without filling up the user's disk if they develop many modules with a large number of tools.
4. `go tool X` always defaults to the tool that ships with the Go distribution in case of conflict,
so that it always acts as you expect.
## Compatibility
There’s no language change; however, we are changing the syntax of `go.mod`.
This should be OK, as the file format was designed with forward compatibility in mind.
If Go adds tools to the distribution in the future that conflict with tools added
to projects’ `go.mod` files, this may cause compatibility issues.
I think this is likely not a big problem in practice, as I expect new builtin tools to be rare.
Experience from using `$PATH` as a shared namespace for executables suggests that
name conflicts in binaries can be easily avoided in practice.
## Implementation
I plan to work on this for go1.24.
## Open questions
### How should this work with Workspaces?
This should probably not do anything special with workspaces.
Because tools must be present in the `require` directives of a module,
there is no easy way to make them work at a workspace level instead of a module level.
It might be possible to try and union all tools in all modules in the workspace,
but I suggest we defer this to future work if it’s desired.
---

# Proposal: Mid-stack inlining in the Go compiler
Author(s): David Lazar, Austin Clements
Last updated: 2017-03-10
Discussion at: https://golang.org/issue/19348
See also: https://golang.org/s/go19inliningtalk
# Abstract
As of Go 1.8, the compiler does not inline mid-stack functions (functions
that call other non-inlineable functions) by default.
This is because the runtime does not have sufficient information to generate
accurate tracebacks for inlined code.
We propose fixing this limitation of tracebacks and enabling mid-stack
inlining by default.
To do this, we will add a new PC-value table to functions with inlined
calls that the runtime can use to generate accurate tracebacks, generate
DWARF inlining information for debuggers, modify runtime.Callers and
related functions to operate in terms of “logical” stack frames, and
modify tools that work with stack traces such as pprof and trace.
Preliminary results show that mid-stack inlining can improve performance by 9%
(Go1 benchmarks on both amd64 and ppc64) with a 15% increase in binary size.
Follow-on work will focus on improving the inlining heuristics to hopefully
achieve this performance with less increase in binary size.
# Background
Inlining is a fundamental compiler optimization that replaces a function
call with the body of the called function.
This eliminates call overhead but more importantly enables other compiler
optimizations, such as constant folding, common subexpression elimination,
loop-invariant code motion, and better register allocation.
As of Go 1.8, inlining happens at the AST level.
To illustrate how the Go 1.8 compiler does inlining, consider the code in
the left column of the table below.
Using heuristics, the compiler decides that the call to PutUint32 in app.go
can be inlined. It replaces the call with a copy of the AST nodes that make
up the body of PutUint32, creating new AST nodes for the arguments.
The resulting code is shown in the right column.
<table>
<tr><th>Before inlining</th><th>After inlining</th></tr>
<tr><td>
<pre>
binary.go:20 func PutUint32(b []byte, v uint32) {
binary.go:21 b[0] = byte(v >> 24)
binary.go:22 b[1] = byte(v >> 16)
binary.go:23 b[2] = byte(v >> 8)
binary.go:24 b[3] = byte(v)
binary.go:25 }
app.go:5 func main() {
app.go:6 // ...
app.go:7 PutUint32(data, input)
app.go:8 // ...
app.go:9 }
</pre>
</td><td>
<pre>
app.go:5 func main() {
app.go:6 // ...
app.go:7 var b []byte, v uint32
app.go:7 b, v = data, input
app.go:7 b[0] = byte(v >> 24)
app.go:7 b[1] = byte(v >> 16)
app.go:7 b[2] = byte(v >> 8)
app.go:7 b[3] = byte(v)
app.go:8 // ...
app.go:9 }
</pre>
</td></tr>
</table>
Notice that the compiler replaces the source positions of the inlined AST
nodes with the source position of the call.
If the inlined code panics (due to an index out of range error), the
resulting stack trace is missing a stack frame for PutUint32 and the user
doesn't get an accurate line number for what caused the panic:
<pre>
panic: runtime error: index out of range
main.main()
/home/gopher/app.go:7 +0x114
</pre>
Thus, even without aggressive inlining, the user might see inaccurate
tracebacks due to inlining.
To mitigate this problem somewhat, the Go 1.8 compiler does not inline
functions that contain calls.
This reduces the likelihood that the user will see an inaccurate traceback,
but it has a negative impact on performance.
Suppose in the example below that `intrinsicLog` is a large function that
won’t be inlined.
By default, the compiler will not inline the calls to `Log` or `LogBase`
since these functions make calls to non-inlineable functions.
However, we can force the compiler to inline these calls using the
compiler flag `-l=4`.
<table>
<tr><th>Before inlining</th><th>After inlining (-l=4)</th></tr>
<tr><td>
<pre>
math.go:41 func Log(x float64) float64 {
math.go:42 if x <= 0 {
math.go:43 panic("log x <= 0")
math.go:44 }
math.go:45 return intrinsicLog(x)
math.go:46 }
math.go:93 func LogBase(x float64, base float64) float64 {
math.go:94 n := Log(x)
math.go:95 d := Log(base)
math.go:96 return n / d
math.go:97 }
app.go:5 func main() {
app.go:6 // ...
app.go:7 val := LogBase(input1, input2)
app.go:8 // ...
app.go:9 }
</pre>
</td><td>
<pre>
app.go:5 func main() {
app.go:6 // ...
app.go:7 x, base := input1, input2
app.go:7 x1 := x
app.go:7 if x1 <= 0 {
app.go:7 panic("log x <= 0")
app.go:7 }
app.go:7 r1 := intrinsicLog(x1)
app.go:7 x2 := base
app.go:7 if x2 <= 0 {
app.go:7 panic("log x <= 0")
app.go:7 }
app.go:7 r2 := intrinsicLog(x2)
app.go:7 n := r1
app.go:7 d := r2
app.go:7 r3 := n / d
app.go:7 val := r3
app.go:8 // ...
app.go:9 }
</pre>
</td></tr>
</table>
Below we have the corresponding stack traces for these two versions of code,
caused by a call to `Log(0)`.
With mid-stack inlining, there is no stack frame or line number information
available for `LogBase`, so the user is unable to determine which input was 0.
<table>
<tr><th>Stack trace before inlining</th><th>Stack trace after inlining (-l=4)</th></tr>
<tr><td>
<pre>
panic(0x497140, 0xc42000e340)
/usr/lib/go/src/runtime/panic.go:500 +0x1a1
main.Log(0x0, 0x400de6bf542e3d2d)
/home/gopher/math.go:43 +0xa0
main.LogBase(0x4045000000000000, 0x0, 0x0)
/home/gopher/math.go:95 +0x49
main.main()
/home/gopher/app.go:7 +0x4c
</pre>
</td><td>
<pre>
panic(0x497140, 0xc42000e340)
/usr/lib/go/src/runtime/panic.go:500 +0x1a1
main.main()
/home/gopher/app.go:7 +0x161
</pre>
</td></tr>
</table>
The goal of this proposed change is to produce complete tracebacks in the
presence of inlining and to enable the compiler to inline non-leaf functions
like `Log` and `LogBase` without sacrificing debuggability.
# Proposal
## Changes to the compiler
Our approach is to modify the compiler to retain the original source
position information of inlined AST nodes and to store information about
the call site in a separate data structure.
Here is what the inlined example from above would look like instead:
<pre>
app.go:5 func main() {
app.go:6 // ...
app.go:7 x, base := input1, input2 ┓ LogBase
math.go:94 x1 := x ┃ app.go:7 ┓ Log
math.go:42 if x1 <= 0 { ┃ ┃ math.go:94
math.go:43 panic("log x <= 0") ┃ ┃
math.go:44 } ┃ ┃
math.go:45 r1 := intrinsicLog(x1) ┃ ┛
math.go:95 x2 := base ┃ ┓ Log
math.go:42 if x2 <= 0 { ┃ ┃ math.go:95
math.go:43 panic("log x <= 0") ┃ ┃
math.go:44 } ┃ ┃
math.go:45 r2 := intrinsicLog(x2) ┃ ┛
math.go:94 n := r1 ┃
math.go:95 d := r2 ┃
math.go:96 r3 := n / d ┛
app.go:7 val := r3
app.go:8 // ...
app.go:9 }
</pre>
Information about inlined calls is stored in a compiler-global data
structure called the *global inlining tree*.
Every time a call is inlined, the compiler adds a new node to the global
inlining tree that contains information about the call site (line number,
file name, and function name).
If the parent function of the inlined call is also inlined, the node for
the inner inlined call points to the node for the parent's inlined call.
For example, here is the inlining tree for the code above:
<pre>
┌──────────┐
│ LogBase │
│ app.go:7 │
└──────────┘
↑ ↑ ┌────────────┐
│ └───┤ Log │
│ │ math.go:94 │
│ └────────────┘
│ ┌────────────┐
└────────┤ Log │
│ math.go:95 │
└────────────┘
</pre>
The inlining tree is encoded as a table with one row per node in the tree.
The parent column is the row index of the node's parent in the table, or -1
if the node has no parent:
| Parent | File | Line | Function Name |
| ------ | -------------- | ---- | ----------------- |
| -1 | app.go | 7 | LogBase |
| 0 | math.go | 94 | Log |
| 0 | math.go | 95 | Log |
Every AST node is associated to a row index in the global inlining
tree/table (or -1 if the node is not the result of inlining).
We maintain this association by extending the `src.PosBase` type with a new
field called the *inlining index*.
Here is what our AST looks like now:
<pre>
app.go:5 func main() {
app.go:6 // ...
app.go:7 x, base := input1, input2 ┃ 0
math.go:94 x1 := x ┓
math.go:42 if x1 <= 0 { ┃
math.go:43 panic("log x <= 0") ┃ 1
math.go:44 } ┃
math.go:45 r1 := intrinsicLog(x1) ┛
math.go:95 x2 := base ┓
math.go:42 if x2 <= 0 { ┃
math.go:43 panic("log x <= 0") ┃ 2
math.go:44 } ┃
math.go:45 r2 := intrinsicLog(x2) ┛
math.go:94 n := r1 ┓
math.go:95 d := r2 ┃ 0
math.go:96 r3 := n / d ┛
app.go:7 val := r3
app.go:8 // ...
app.go:9 }
</pre>
As the AST nodes are lowered, their `src.PosBase` values are copied to
the resulting `Prog` pseudo-instructions.
The object writer reads the global inlining tree and the inlining index of
each `Prog` and writes this information compactly to object files.
## Changes to the object writer
The object writer creates two new tables per function.
The first table is the *local inlining tree* which contains all the
branches from the global inlining tree that are referenced by the Progs
in that function.
The second table is a PC-value table called the *pcinline table* that maps
each PC to a row index in the local inlining tree, or -1 if the PC does not
correspond to a function that has been inlined.
The local inlining tree and pcinline table are written to object files as
part of each function's pcln table.
The file names and function names in the local inlining tree are represented
using symbol references which are resolved to name offsets by the linker.
## Changes to the linker
The linker reads the new tables produced by the object writer and writes
the tables to the final binary.
We reserve `pcdata[1]` for the pcinline table and `funcdata[2]` for the
local inlining tree.
The linker writes the pcinline table to `pcdata[1]` unmodified.
The local inlining tree is encoded using 16 bytes per row (4 bytes per column).
The parent and line numbers are encoded directly as int32 values.
The file name and function names are encoded as int32 offsets into existing
global string tables.
This table must be written by the linker rather than the compiler because the
linker deduplicates these names and resolves them to global name offsets.
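As a rough sketch of the fixed-width encoding described above (the offsets into the string tables are made up for illustration):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// inlRow mirrors one row of the local inlining tree: the parent row
// index (-1 for a root) plus offsets into the linker's global string
// tables for the file and function names.
type inlRow struct {
	parent  int32
	fileOff int32
	line    int32
	funcOff int32
}

// encode packs rows at 16 bytes each (4 bytes per column).
func encode(rows []inlRow) []byte {
	buf := make([]byte, 0, 16*len(rows))
	for _, r := range rows {
		for _, v := range []int32{r.parent, r.fileOff, r.line, r.funcOff} {
			buf = binary.LittleEndian.AppendUint32(buf, uint32(v))
		}
	}
	return buf
}

// decodeRow reads back the i'th 16-byte row.
func decodeRow(buf []byte, i int) inlRow {
	b := buf[16*i:]
	u := binary.LittleEndian.Uint32
	return inlRow{int32(u(b)), int32(u(b[4:])), int32(u(b[8:])), int32(u(b[12:]))}
}

func main() {
	rows := []inlRow{
		{parent: -1, fileOff: 0, line: 7, funcOff: 10},  // LogBase at app.go:7
		{parent: 0, fileOff: 20, line: 94, funcOff: 30}, // Log at math.go:94
		{parent: 0, fileOff: 20, line: 95, funcOff: 30}, // Log at math.go:95
	}
	buf := encode(rows)
	fmt.Println(len(buf))               // 48
	fmt.Println(decodeRow(buf, 2).line) // 95
}
```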
If necessary, we can encode the inlining tree more compactly using a varint
for each column value.
In the compact encoding, the parent column and the values in the pcinline
table would be byte offsets into the local inlining tree instead of row
indices.
In this case, the linker would have to regenerate the pcinline table.
## Changes to the runtime
The `runtime.gentraceback` function generates tracebacks and is modified
to produce logical stack frames for inlined functions.
The `gentraceback` function has two modes that are affected by inlining:
printing mode, used to print a stack trace when the runtime panics, and
pcbuf mode, which returns a buffer of PC values used by `runtime.Callers`.
In both modes, `gentraceback` checks if the current PC is mapped to a node
in the function's inlining tree by decoding the pcinline table for the
current function until it finds the value at the current PC.
If the value is -1, this instruction is not a result of inlining, so the
traceback proceeds normally.
Otherwise, `gentraceback` decodes the inlining tree and follows the path
up the tree to create the traceback.
Suppose that `pcPos` is the position information for the current PC
(obtained from the pcline and pcfile tables), `pcFunc` is the function
name for the current PC, and `st[0] -> st[1] -> ... -> st[k]` is the
path up the inlining tree for the current PC.
To print an accurate stack trace, `gentraceback` prints function names
and their corresponding position information in this order:
| Function name | Source position |
| ------------- | --------------- |
| st[0].Func | pcPos |
| st[1].Func | st[0].Pos |
| ... | ... |
| st[k].Func | st[k-1].Pos |
| pcFunc | st[k].Pos |
This process repeats for every PC in the traceback.
Note that the runtime only has sufficient information to print function
arguments and PC offsets for the last entry in this table.
Here is the resulting stack trace from the example above with our changes:
<pre>
main.Log(...)
/home/gopher/math.go:43
main.LogBase(...)
/home/gopher/math.go:95
main.main()
/home/gopher/app.go:7 +0x1c8
</pre>
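The traversal described by the table above can be sketched as follows (the function and position strings are illustrative):

```go
package main

import "fmt"

// node is one entry in a function's inlining tree: the name of the
// inlined function and the call site (file:line) at which it was inlined.
type node struct {
	Func string
	Pos  string
}

// expand produces the traceback entries for one PC, following the
// table above: st is the path up the inlining tree (innermost first),
// and pcFunc/pcPos describe the physical frame containing the PC.
func expand(st []node, pcFunc, pcPos string) []string {
	var frames []string
	pos := pcPos
	for _, n := range st {
		frames = append(frames, n.Func+" at "+pos)
		pos = n.Pos
	}
	frames = append(frames, pcFunc+" at "+pos)
	return frames
}

func main() {
	// Path for a PC inside Log, inlined at math.go:95 into LogBase,
	// which was inlined at app.go:7 into main.
	st := []node{
		{Func: "main.Log", Pos: "math.go:95"},
		{Func: "main.LogBase", Pos: "app.go:7"},
	}
	for _, f := range expand(st, "main.main", "math.go:43") {
		fmt.Println(f)
	}
	// main.Log at math.go:43
	// main.LogBase at math.go:95
	// main.main at app.go:7
}
```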
## Changes to the runtime public API
With inlining, a PC may represent multiple logical calls, so we need to
clarify the meaning of some runtime APIs related to tracebacks.
For example, the `skip` argument passed to `runtime.Caller` and
`runtime.Callers` will be interpreted as the number of logical calls to skip
(rather than the number of physical stack frames to skip).
Unfortunately, the runtime.Callers API requires some modification to be
compatible with mid-stack inlining.
The result value of runtime.Callers is a slice of program counters
([]uintptr) representing physical stack frames.
If the `skip` parameter to runtime.Callers skips part-way into a physical
frame, there is no convenient way to encode that in the resulting slice.
To avoid changing the API in an incompatible way, our solution is to store
the number of skipped logical calls of the first frame in the _second_
uintptr returned by runtime.Callers.
Since this number is a small integer, we encode it as a valid PC value
into a small symbol called `runtime.skipPleaseUseCallersFrames`.
For example, if f() calls g(), g() calls `runtime.Callers(2, pcs)`, and
g() is inlined into f, then the frame for f will be partially skipped,
resulting in the following slice:
pcs = []uintptr{pc_in_f, runtime.skipPleaseUseCallersFrames+1, ...}
The `runtime.CallersFrames` function will check if the second PC is
in `runtime.skipPleaseUseCallersFrames` and skip the corresponding
number of logical calls.
We store the skip PC in `pcs[1]` instead of `pcs[0]` so that `pcs[i:]`
will truncate the captured stack trace rather than grow it for all i
(otherwise `pcs[1:]` would grow the stack trace).
Code that iterates over the PC slice from `runtime.Callers` calling
`FuncForPC` will have to be updated as described below to continue
observing complete stack traces.
# Rationale
Even with just leaf inlining, the new inlining tables increase the
size of binaries (see Preliminary Results).
However, this increase is unavoidable if the runtime is to print complete
stack traces.
Turning on mid-stack inlining increases binary size more significantly,
but we can tweak the inlining heuristic to find a good tradeoff between
binary size, performance, and build times.
We considered several alternative designs before we reached the design
described in this document.
One tempting alternative is to reuse the existing file and line PC-value
tables and simply add a new PC-value table for the “parent” PC of each
instruction, rather than a new funcdata table.
This appears to represent the same information as the proposed funcdata table.
However, some PCs might not have a parent PC to point at, for example
if an inlined call produces the very first instruction in a function.
We considered adding NOP instructions to represent the parent of an inlined
call, but concluded that a separate inlining tree is more compact.
Another alternative design involves adding push and pop operations to the
PC-value table decoder for representing the inlined call stack.
We didn't prototype this design since the other designs seemed conceptually
simpler.
# Compatibility
Prior to Go 1.7, the recommended way to use `runtime.Callers` was to loop
over the returned PCs and call functions like `runtime.FuncForPC` on each
PC directly.
With mid-stack inlining, code using this pattern will observe incomplete
call stacks, since inlined frames will be omitted.
In preparation for this, the `runtime.Frames` API was introduced in Go 1.7
as a higher-level way to interpret the results of `runtime.Callers`.
We consider this to be a minor issue, since users will have had two releases
to update to `runtime.Frames` and any remaining direct uses of
`runtime.FuncForPC` will continue to work, simply in a degraded fashion.
# Implementation
David will implement this proposal during the Go 1.9 time frame.
As of the beginning of the Go 1.9 development cycle, a mostly complete
prototype of the changes to the compiler, linker, and runtime is already
working.
The initial implementation goal is to make all tests pass with `-l=4`.
We will then focus on bringing tools and DWARF information up-to-date
with mid-stack inlining.
Once this support is complete, we plan to make `-l=4` the default setting.
We should also update the `debug/gosym` package to expose the new inlining
information.
*Update* (2017-03-04): CLs that add inlining info and fix stack traces
have been merged into master.
CLs that fix runtime.Callers are under submission.
# Prerequisite Changes
Prior to this work, Go had the `-l=4` flag to turn on mid-stack inlining,
but this mode had issues beyond incomplete stack traces.
For example, before we could run experiments with `-l=4`, we had to fix
inlining of variadic functions ([CL 33671](https://golang.org/cl/33671)),
mark certain cgo functions as uninlineable ([CL 33722](https://golang.org/cl/33722)),
and include linknames in export data ([CL 33911](https://golang.org/cl/33911)).
Before we turn on mid-stack inlining, we will have to update uses
of runtime.Callers in the runtime to use runtime.CallersFrames.
We will also have to make tests independent of inlining
(e.g., [CL 37237](https://golang.org/cl/37237)).
# Preliminary Results
Mid-stack inlining (`-l=4`) gives a 9% geomean improvement on the Go1
benchmarks on amd64:
https://perf.golang.org/search?q=upload:20170309.1
The same experiment on ppc64 also showed a 9-10% improvement.
The new inlining tables increase binary size by 4% without mid-stack inlining.
Mid-stack inlining increases the size of the Go1 benchmark binary by an
additional 11%.
# Open issues
One limitation of this approach is that the runtime is unable to print
the arguments to inlined calls in a stack trace.
This is because the runtime gets arguments by assuming a certain stack
layout, but there is no stack frame for inlined calls.
This proposal does not propose significant changes to the existing
inlining heuristics.
Since mid-stack inlining is now a possibility, we should revisit the
inlining heuristics in follow-on work.
---

# Proposal: Monotonic Elapsed Time Measurements in Go
Author: Russ Cox
Last updated: January 26, 2017<br>
Discussion: [https://golang.org/issue/12914](https://golang.org/issue/12914).<br>
URL: https://golang.org/design/12914-monotonic
## Abstract
Comparison and subtraction of times observed by `time.Now` can return incorrect
results if the system wall clock is reset between the two observations.
We propose to extend the `time.Time` representation to hold an
additional monotonic clock reading for use in those calculations.
Among other benefits, this should make it impossible for a basic elapsed time
measurement using `time.Now` and `time.Since` to report a negative duration
or other result not grounded in reality.
## Background
### Clocks
A clock never keeps perfect time.
Eventually, someone notices,
decides the accumulated error—compared to a reference clock deemed more reliable—is
large enough to be worth fixing,
and resets the clock to match the reference.
As I write this, the watch on my wrist is 44 seconds ahead of the clock on my computer.
Compared to the computer, my watch gains about five seconds a day.
In a few days I will probably be bothered enough to reset it to match the computer.
My watch may not be perfect for identifying the precise
moment when a meeting should begin,
but it's quite good for measuring elapsed time.
If I start timing an event by checking the time,
and then I stop timing the event by checking again
and subtracting the two times,
the error contributed by the watch speed
will be under 0.01%.
Resetting a clock makes it better for telling time
but useless, in that moment, for measuring time.
If I reset my watch to match my computer while I am timing an event,
the time of day it shows is now more accurate,
but subtracting the start and end times for the event
will produce a measurement that includes the reset.
If I turn my watch back 44 seconds
while timing a 60-second event, I would
(unless I correct for the reset)
measure the event as taking 16 seconds.
Worse, I could measure a 10-second event
as taking −34 seconds, ending before it began.
Since I know the watch is consistently
gaining five seconds per day,
I could reduce the need for resets
by taking it to a watchmaker to adjust the
mechanism to tick ever so slightly slower.
I could also reduce the size of the resets
by doing them more often.
If, five times a day at regular intervals,
I stopped my watch for one second,
I wouldn't ever need a 44-second reset,
reducing the maximum possible error
introduced in the timing of an event.
Similarly, if instead my watch lost five seconds each day,
I could turn it forward one second five times a day
to avoid larger forward resets.
### Computer clocks
All the same problems affect computer clocks,
usually with smaller time units.
Most computers have some kind of
high-precision clock and a way to convert ticks of that clock
to an equivalent number of seconds.
Often, software on the computer compares that
clock to a higher-accuracy reference clock
[accessed over the network](https://tools.ietf.org/html/rfc5905).
If the local clock is observed to be
slightly ahead, it can be slowed a little
by dropping an occasional tick;
if slightly behind, sped up by counting some ticks twice.
If the local clock is observed to run at a
consistent speed relative to the reference clock
(for example, five seconds fast per day),
the software can change the conversion formula,
making the slight corrections less frequent.
These minor adjustments, applied regularly,
can keep the local clock matched to the reference clock
without observable resets,
giving the outward appearance of a perfectly synchronized clock.
Unfortunately, many systems fall short of this
appearance of perfection, for two main reasons.
First, some computer clocks are unreliable or
don't run at all when the computer is off.
The time starts out very wrong.
After learning the correct time from the network,
the only correction option is a reset.
Second, most computer time representations ignore leap seconds,
in part because leap seconds—unlike leap years—follow no predictable pattern:
the [IERS decides about six months in advance](https://en.wikipedia.org/wiki/Leap_second)
whether to insert (or in theory remove)
a leap second at the end of a particular calendar month.
In the real world, the leap second 23:59:60 UTC is inserted
between 23:59:59 UTC and 00:00:00 UTC.
Most computers, unable to represent 23:59:60,
instead insert a clock reset and repeat 23:59:59.
Just like my watch,
resetting a computer clock makes it better for telling time
but useless, in that moment, for measuring time.
Entering a leap second,
the clock might report 23:59:59.995 at one instant
and then report 23:59:59.005 ten milliseconds later;
subtracting these to compute elapsed time results in
−990 ms instead of +10 ms.
To avoid the problem of measuring elapsed times across clock resets,
operating systems provide access to two different clocks:
a wall clock and a monotonic clock.
Both are adjusted to move forward at a target rate of one clock second per real second,
but the monotonic clock starts at an undefined absolute value and is never reset.
The wall clock is for telling time;
the monotonic clock is for measuring time.
C/C++ programs use the operating system-provided mechanisms
for querying one clock or the other.
Java's [`System.nanoTime`](https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#nanoTime--)
is widely believed to read a monotonic clock where available,
returning an int64 counting nanoseconds since an arbitrary start point.
Python 3.3 added monotonic clock support in [PEP 418](https://www.python.org/dev/peps/pep-0418/).
The new function `time.monotonic` reads the monotonic clock, returning a float64 counting seconds since
an arbitrary start point; the old function `time.time` reads the system wall clock,
returning a float64 counting seconds since 1970.
### Go time
Go's current [time API](https://golang.org/pkg/time/),
which Rob Pike and I designed in 2011,
defines an opaque type `time.Time`,
a function `time.Now` that returns the current time,
and a method `t.Sub(u)` to subtract two times,
along with other methods interpreting a `time.Time` as a wall clock time.
These are widely used by Go programs to measure elapsed times.
The implementation of these functions only reads the system wall clock,
never the monotonic clock,
making the measurements incorrect in the event of clock resets.
Go's original target was Google's production servers, on which
the wall clock never resets: the time is set very early in
system startup, before any Go software runs,
and leap seconds are handled by a [leap smear](https://developers.google.com/time/smear#standardsmear),
spreading the extra second
over a 20-hour window in which the clock runs at 99.9986% speed
(20 hours on that clock corresponds to 20 hours and one second
in the real world).
In 2011, I hoped that the trend toward reliable, reset-free computer clocks
would continue and that Go programs could safely use the system wall clock
to measure elapsed times.
I was wrong.
Although Akamai, Amazon, and Microsoft use leap smears now too,
many systems still implement leap seconds by clock reset.
A Go program measuring a negative elapsed time during a leap second
caused [CloudFlare's recent DNS outage](https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/).
Wikipedia's
[list of examples of problems associated with the leap second](https://en.wikipedia.org/wiki/Leap_second#Examples_of_problems_associated_with_the_leap_second)
now includes CloudFlare's outage and
notes Go's time APIs as the root cause.
Beyond the problem of leap seconds, Go has also expanded to systems
in non-production environments
that may have less well-regulated clocks and consequently
more frequent clock resets.
Go must handle clock resets gracefully.
The internals of both the Go runtime and the Go time package
originally used wall time but have already been converted as much as possible
(without changing exported APIs)
to use the monotonic clock.
For example, if a goroutine runs `time.Sleep(1*time.Minute)` and then
the wall clock resets backward one hour,
in the original Go implementation that goroutine would have slept for
61 real minutes.
Today, that goroutine always sleeps for only 1 real minute.
All other time APIs using `time.Duration`, such as
`time.After`, `time.Tick`, and `time.NewTimer`,
have similarly been converted to implement those durations
using the monotonic clock.
Three standard Go APIs remain that use the system wall clock
but should more properly use the monotonic clock.
Due to [Go 1 compatibility](https://golang.org/doc/go1compat),
the types and method names used in the APIs cannot be changed.
The first problematic Go API is measurement of elapsed times.
Much code exists that uses patterns like:

    start := time.Now()
    ... something ...
    end := time.Now()
    elapsed := end.Sub(start)

or, equivalently:

    start := time.Now()
    ... something ...
    elapsed := time.Since(start)

Because today `time.Now` reads the wall clock,
those measurements are wrong if the wall clock resets
between calls,
as happened at CloudFlare.
The second problematic Go API is network connection timeouts.
Originally, the `net.Conn` interface included methods to set timeouts in terms of durations:

    type Conn interface {
        ...
        SetTimeout(d time.Duration)
        SetReadTimeout(d time.Duration)
        SetWriteTimeout(d time.Duration)
    }

This API confused users: it wasn't clear whether the duration measurement began
when the timeout was set or began anew at each I/O operation.
That is, if you call `SetReadTimeout(100*time.Millisecond)`,
does every `Read` call wait 100ms before timing out,
or do all `Read`s simply stop working 100ms after the call to `SetReadTimeout`?
To avoid this confusion, we changed and renamed the APIs for Go 1 to use
deadlines represented as `time.Time`s:

    type Conn interface {
        ...
        SetDeadline(t time.Time)
        SetReadDeadline(t time.Time)
        SetWriteDeadline(t time.Time)
    }

These are almost always invoked by adding a duration to the current time, as in
`c.SetDeadline(time.Now().Add(5*time.Second))`,
which is longer but clearer than `SetTimeout(5*time.Second)`.
Internally, the standard implementations of `net.Conn` implement
deadlines by converting the wall clock time to monotonic clock time
immediately.
In the call `c.SetDeadline(time.Now().Add(5*time.Second))`,
the deadline exists in wall clock form only for the hundreds of nanoseconds
between adding the current wall clock time while preparing the argument
and subtracting it again at the start of `SetDeadline`.
Even so, if the system wall clock resets
during that tiny window, the deadline will be extended or contracted
by the reset amount,
resulting in possible hangs or spurious timeouts.
The third problematic Go API is [context deadlines](https://golang.org/pkg/context/#Context).
The `context.Context` interface defines a method that returns a `time.Time`:

    type Context interface {
        Deadline() (deadline time.Time, ok bool)
        ...
    }

Context uses a time instead of a duration for much the same
reasons as `net.Conn`: the returned deadline
may be stored and consulted occasionally,
and using a fixed `time.Time` makes those later
consultations refer to a fixed instant instead of a floating one.
In addition to these three standard APIs, there are any number of
APIs outside the standard library that also use `time.Time`s in similar ways.
For example, a common metrics collection package encourages
users to time functions by writing:

    defer metrics.MeasureSince(description, time.Now())

It seems clear that Go must better support
computations involving elapsed times, including checking deadlines:
wall clocks do reset and cause problems on systems where Go runs.
A survey of existing Go usage suggests that about 30%
of the calls to `time.Now` (by source code appearance, not dynamic call count)
are used for measuring elapsed time and should use the system monotonic clock.
Identifying and fixing all of these would be a large undertaking,
as would developer education to correct future uses.
## Proposal
For both backwards compatibility and API simplicity,
we propose not to introduce
any new API in the time package exposing the idea of monotonic clocks.
Instead, we propose to change `time.Time` to store both a wall clock reading
and an optional, additional monotonic clock reading;
to change `time.Now` to read both clocks and return a `time.Time` containing both readings;
to change `t.Add(d)` to return a `time.Time` in which both readings (if present)
have been adjusted by `d`;
and to change `t.Sub(u)` to operate on monotonic clock times
when both `t` and `u` have them.
In this way, developers keep using `time.Now` always,
leaving the implementation to follow the rule:
use the wall clock for telling time, the monotonic clock for measuring time.
More specifically, we propose to make these changes to the [package time documentation](https://golang.org/pkg/time/),
along with corresponding changes to the implementation.
Add this paragraph to the end of the `time.Time` documentation:
> In addition to the required “wall clock” reading, a Time may contain an
> optional reading of the current process's monotonic clock,
> to provide additional precision for comparison or subtraction.
> See the “Monotonic Clocks” section in the package documentation
> for details.
Add this section to the end of the package documentation:
> Monotonic Clocks
>
> Operating systems provide both a “wall clock,” which is subject
> to resets for clock synchronization, and a “monotonic clock,” which is not.
> The general rule is that the wall clock is for telling time and the
> monotonic clock is for measuring time.
> Rather than split the API, in this package the Time returned by time.Now
> contains both a wall clock reading and a monotonic clock reading;
> later time-telling operations use the wall clock reading,
> but later time-measuring operations, specifically comparisons
> and subtractions, use the monotonic clock reading.
>
> For example, this code always computes a positive elapsed time of
> approximately 20 milliseconds, even if the wall clock is reset
> during the operation being timed:
>
>     start := time.Now()
>     ... operation that takes 20 milliseconds ...
>     t := time.Now()
>     elapsed := t.Sub(start)
>
> Other idioms, such as time.Since(start), time.Until(deadline),
> and time.Now().Before(deadline), are similarly robust against
> wall clock resets.
>
> The rest of this section gives the precise details of how operations
> use monotonic clocks, but understanding those details is not required
> to use this package.
>
> The Time returned by time.Now contains a monotonic clock reading.
> If Time t has a monotonic clock reading, t.Add(d), t.Round(d),
> or t.Truncate(d) adds the same duration to both the wall clock
> and monotonic clock readings to compute the result.
> Similarly, t.In(loc), t.Local(), or t.UTC(), which are defined to change
> only the Time's Location, pass any monotonic clock reading
> through unmodified.
> Because t.AddDate(y, m, d) is a wall time computation,
> it always strips any monotonic clock reading from its result.
>
> If Times t and u both contain monotonic clock readings, the operations
> t.After(u), t.Before(u), t.Equal(u), and t.Sub(u) are carried out using
> the monotonic clock readings alone, ignoring the wall clock readings.
> (If either t or u contains no monotonic clock reading, these operations
> use the wall clock readings.)
>
> Note that the Go == operator includes the monotonic clock reading in its comparison.
> If time values returned from time.Now and time values constructed by other means
> (for example, by time.Parse or time.Unix) are meant to compare equal when used
> as map keys, the times returned by time.Now must have the monotonic clock
> reading stripped, by setting t = t.AddDate(0, 0, 0).
> In general, prefer t.Equal(u) to t == u, since t.Equal uses the most accurate
> comparison available and correctly handles the case when only one of its
> arguments has a monotonic clock reading.
## Rationale
### Design
The main design question is whether to overload `time.Time`
or to provide a separate API for accessing the monotonic clock.
Most other systems provide separate APIs to read the wall clock
and the monotonic clock, leaving the developer to decide
between them at each use, hopefully by applying the rule stated above:
“The wall clock is for telling time.
The monotonic clock is for measuring time.”
If a developer uses a wall clock to measure time,
that program will work correctly, almost always,
except in the rare event of a clock reset.
Providing two APIs that behave the same 99% of the time
makes it very easy (and likely) for a developer to write
a program that fails only rarely and not notice.
It gets worse.
The program failures aren't random, like a race condition:
they're caused by external events, namely clock resets.
The most common clock reset in a well-run production setting
is the leap second, which occurs simultaneously on all systems.
When it does, all the copies of the program
across the entire distributed system fail simultaneously,
defeating any redundancy the system might have had.
So providing two APIs makes it very easy (and likely)
for a developer to write programs that fail only rarely,
but typically all at the same time.
This proposal instead treats the monotonic clock not as
a new concept for developers to learn but instead as an
implementation detail that can improve the accuracy of
measuring time with the existing API.
Developers don't need to learn anything new,
and the obvious code just works.
The implementation applies the rule;
the developer doesn't have to think about it.
As noted earlier,
a survey of existing Go usage (see Appendix below)
suggests that about 30% of calls to `time.Now`
are used for measuring elapsed time and should use a monotonic clock.
The same survey shows that all of those calls
are fixed by this proposal, with no change in the programs themselves.
### Simplicity
It is certainly simpler, in terms of implementation,
to provide separate routines to read the wall clock and
the monotonic clock and leave proper usage to developers.
The API in this proposal is a bit more complex to specify
and to implement but much simpler for developers to use.
No matter what, the effects of clock resets, especially leap seconds,
can be counterintuitive.
Suppose a program starts just before a leap second:

    t1 := time.Now()
    ... 10 ms of work
    t2 := time.Now()
    ... 10 ms of work
    t3 := time.Now()
    ... 10 ms of work
    const f = "15:04:05.000"
    fmt.Println(t1.Format(f), t2.Sub(t1), t2.Format(f), t3.Sub(t2), t3.Format(f))

In Go 1.8, the program can print:

    23:59:59.985 10ms 23:59:59.995 -990ms 23:59:59.005

In the design proposed above, the program instead prints:

    23:59:59.985 10ms 23:59:59.995 10ms 23:59:59.005

Although in both cases the second elapsed time requires some explanation,
I'd rather explain 10ms than −990ms.
Most importantly, the actual time elapsed between the t2 and t3 calls to `time.Now`
really is 10 milliseconds.
In this case, 23:59:59.005 minus 23:59:59.995 can be 10 milliseconds,
even though the printed times would suggest −990ms,
because the printed time is incomplete.
The printed time is incomplete in other settings too.
Suppose a program starts just before noon, printing only hours and minutes:

    t1 := time.Now()
    ... 10 ms of work
    t2 := time.Now()
    ... 10 ms of work
    t3 := time.Now()
    ... 10 ms of work
    const f = "15:04"
    fmt.Println(t1.Format(f), t2.Sub(t1), t2.Format(f), t3.Sub(t2), t3.Format(f))

In Go 1.8, the program can print:

    11:59 10ms 11:59 10ms 12:00

This is easily understood, even though the printed times indicate durations of 0 and 1 minute.
The printed time is incomplete: it omits second and subsecond resolution.
Suppose instead that the program starts just before a 1am daylight savings shift.
In Go 1.8, the program can print:

    00:59 10ms 00:59 10ms 02:00

This too is easily understood, even though the printed times indicate durations of 0 and 61 minutes.
The printed time is incomplete: it omits the time zone.
In the original example, printing 10ms instead of −990ms is easily understood in the same way.
The printed time is incomplete: it omits clock resets.
The Go 1.8 time representation makes correct time calculations across time zone changes
by storing a time unaffected by time zone changes,
along with additional information used for printing the time.
Similarly, the proposed new time representation makes correct time calculations across clock resets
by storing a time unaffected by clock resets (the monotonic clock reading),
along with additional information used for printing the time (the wall clock reading).
## Compatibility
[Go 1 compatibility](https://golang.org/doc/go1compat)
keeps us from changing any of the types in the APIs mentioned above.
In particular, `net.Conn`'s `SetDeadline` method must continue to
take a `time.Time`, and `context.Context`'s `Deadline` method
must continue to return one.
We arrived at the current proposal due to these compatibility
constraints, but as explained in the Rationale above,
it may actually be the best choice anyway.
Also mentioned above,
about 30% of calls to `time.Now` are used for measuring elapsed time
and would be affected by this proposal.
In every case we've examined (see Appendix below), the effect is to eliminate
the possibility of incorrect measurement results due to clock resets.
We have found no existing Go code that is broken by
the improved measurements.
If the proposal is adopted, the implementation should be landed at the
start of a [release cycle](https://golang.org/wiki/Go-Release-Cycle),
to maximize the time in which to find unexpected compatibility problems.
## Implementation
The implementation work in package time is fairly straightforward,
since the runtime has already worked out access to the monotonic clock on
(nearly) all supported operating systems.
### Reading the clocks
**Precision**:
In general, operating systems provide different system operations to read the
wall clock and the monotonic clock, so the
implementation of `time.Now` must read both in sequence.
Time will advance between the calls, with the effect that even in the absence of
clock resets, `t.Sub(u)` (using monotonic clock readings) and `t.AddDate(0,0,0).Sub(u)` (using wall clock readings)
will differ slightly.
Since both cases are subtracting times obtained from `time.Now`, both results are arguably correct:
any discrepancy is necessarily less than the overhead of the calls to `time.Now`.
This discrepancy only arises if code actively looks for it, by doing the subtraction or comparison both ways.
In the survey of extant Go code (see Appendix below),
we found no such code that would detect this discrepancy.
On x86 systems, Linux, macOS, and Windows convey clock information to user
processes by publishing a page of memory containing the coefficients for a formula
converting the processor's time stamp counter to monotonic clock and to wall clock readings.
A perfectly synchronized read of both clocks could be obtained in this case by
doing a single read of the time stamp counter and applying both formulas to the
same input.
This is an option if we decide it is important to eliminate the discrepancy
on commonly used systems.
This would improve precision, but it would be false precision,
beyond the actual accuracy of the calls.
**Overhead**:
There is obviously an overhead to having `time.Now` read two system clocks instead of one.
However, as just mentioned, the usual implementation of these operations
does not typically enter the operating system kernel,
making two calls still quite cheap.
The same “simultaneous computation” we could apply for additional precision
would also reduce the overhead.
### Time representation
The current definition of a `time.Time` is:

    type Time struct {
        sec  int64     // seconds since Jan 1, year 1 00:00:00 UTC
        nsec int32     // nanoseconds, in [0, 999999999]
        loc  *Location // location, for minute, hour, month, day, year
    }

To add the optional monotonic clock reading, we can change the representation to:

    type Time struct {
        wall uint64    // wall time: 1-bit flag, 33-bit sec since 1885, 30-bit nsec
        ext  int64     // extended time information
        loc  *Location // location
    }

The wall field can encode the wall time, packed into a 33-bit seconds and 30-bit nsecs
(keeping them separate avoids costly divisions).
2<sup>33</sup> seconds is 272 years, so the wall field by itself
can encode times from the years 1885 to 2157 to nanosecond precision.
If the top flag bit in `t.wall` is set, then the wall seconds are packed into `t.wall`
as just described, and `t.ext` holds
a monotonic clock reading, stored as nanoseconds since Go process startup
(translating to process start ensures we can store monotonic clock readings
even if the operating system returns a representation larger than 64 bits).
Otherwise (the top flag bit is clear), the 33-bit field in `t.wall` must be zero,
and `t.ext` holds the full 64-bit seconds since Jan 1, year 1, as in the
original Time representation.
Note that the meaning of the zero Time is unchanged.
An implication is that monotonic clock readings can only be stored
alongside wall clock readings for the years 1885 to 2157.
We only need to store monotonic clock readings in the result of `time.Now`
and derived nearby times,
and we expect those times to lie well within the range 1885 to 2157.
The low end of the range is constrained by the default boot time
used on a system with a dead clock:
in this common case, we must be able to store a
monotonic clock reading alongside the wall clock reading.
Unix-based systems often use 1970, and Windows-based systems often use 1980.
We are unaware of any systems using earlier default wall times,
but since the NTP protocol epoch uses 1900, it seemed more future-proof
to choose a year before 1900.
On 64-bit systems, there is a 32-bit padding gap between `nsec` and `loc`
in the current representation, which the new representation fills,
keeping the overall struct size at 24 bytes.
On 32-bit systems, there is no such gap, and the overall struct size
grows from 16 to 20 bytes.
# Appendix: time.Now usage
We analyzed uses of time.Now in [Go Corpus v0.01](https://github.com/rsc/corpus).
Overall estimates:
- 71% unaffected
- 29% fixed in event of wall clock time warps (subtractions or comparisons)
Basic counts:
$ cg -f $(pwd)'.*\.go$' 'time\.Now\(\)' | sed 's;//.*;;' |grep time.Now >alltimenow
$ wc -l alltimenow
16569 alltimenow
$ egrep -c 'time\.Now\(\).*time\.Now\(\)' alltimenow
63
$ 9 sed -n 's/.*(time\.Now\(\)(\.[A-Za-z0-9]+)?).*/\1/p' alltimenow | sort | uniq -c
4910 time.Now()
1511 time.Now().Add
45 time.Now().AddDate
69 time.Now().After
77 time.Now().Before
4 time.Now().Date
5 time.Now().Day
1 time.Now().Equal
130 time.Now().Format
23 time.Now().In
8 time.Now().Local
4 time.Now().Location
1 time.Now().MarshalBinary
2 time.Now().MarshalText
2 time.Now().Minute
68 time.Now().Nanosecond
14 time.Now().Round
22 time.Now().Second
37 time.Now().String
370 time.Now().Sub
28 time.Now().Truncate
570 time.Now().UTC
582 time.Now().Unix
8067 time.Now().UnixNano
17 time.Now().Year
2 time.Now().Zone
That splits into completely unaffected:
45 time.Now().AddDate
4 time.Now().Date
5 time.Now().Day
130 time.Now().Format
23 time.Now().In
8 time.Now().Local
4 time.Now().Location
1 time.Now().MarshalBinary
2 time.Now().MarshalText
2 time.Now().Minute
68 time.Now().Nanosecond
14 time.Now().Round
22 time.Now().Second
37 time.Now().String
28 time.Now().Truncate
570 time.Now().UTC
582 time.Now().Unix
8067 time.Now().UnixNano
17 time.Now().Year
2 time.Now().Zone
9631 TOTAL
and possibly affected:
4910 time.Now()
1511 time.Now().Add
69 time.Now().After
77 time.Now().Before
1 time.Now().Equal
370 time.Now().Sub
6938 TOTAL
If we pull out the possibly affected lines, the overall count is slightly higher because of the 63 lines with more than one time.Now call:
$ egrep 'time\.Now\(\)([^.]|\.(Add|After|Before|Equal|Sub)|$)' alltimenow >checktimenow
$ wc -l checktimenow
6982 checktimenow
From the start, then, 58% of time.Now uses immediately flip to wall time and are unaffected.
The remaining 42% may be affected.
Randomly sampling 100 of the 42%, we find:
- 32 unaffected (23 use wall time once; 9 use wall time multiple times)
- 68 fixed
We estimate therefore that the 42% is made up of 13% additional unaffected and 29% fixed, giving an overall total of 71% unaffected, 29% fixed.
## Unaffected
### github.com/mitchellh/packer/vendor/google.golang.org/appengine/demos/guestbook/guestbook.go:97
func handleSign(w http.ResponseWriter, r *http.Request) {
...
g := &Greeting{
Content: r.FormValue("content"),
Date: time.Now(),
}
... datastore.Put(ctx, key, g) ...
}
**Unaffected.**
The time will be used exactly once, during the serialization of g.Date in datastore.Put.
### github.com/aws/aws-sdk-go/service/databasemigrationservice/examples_test.go:887
func ExampleDatabaseMigrationService_ModifyReplicationTask() {
...
params := &databasemigrationservice.ModifyReplicationTaskInput{
...
CdcStartTime: aws.Time(time.Now()),
...
}
... svc.ModifyReplicationTask(params) ...
}
**Unaffected.**
The time will be used exactly once, during the serialization of params.CdcStartTime in svc.ModifyReplicationTask.
### github.com/influxdata/telegraf/plugins/inputs/mongodb/mongodb_data_test.go:94
d := NewMongodbData(
&StatLine{
...
Time: time.Now(),
...
},
...
)
StatLine.Time is commented as “the time at which this StatLine was generated” and is only used
by passing to acc.AddFields, where acc is a telegraf.Accumulator.
// AddFields adds a metric to the accumulator with the given measurement
// name, fields, and tags (and timestamp). If a timestamp is not provided,
// then the accumulator sets it to "now".
// Create a point with a value, decorating it with tags
// NOTE: tags is expected to be owned by the caller, don't mutate
// it after passing to Add.
AddFields(measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time)
The non-test implementation of Accumulator calls t.Round, which will convert to wall time.
**Unaffected.**
### github.com/spf13/fsync/fsync_test.go:23
// set times in the past to make sure times are synced, not accidentally
// the same
tt := time.Now().Add(-1 * time.Hour)
check(os.Chtimes("src/a/b", tt, tt))
check(os.Chtimes("src/a", tt, tt))
check(os.Chtimes("src/c", tt, tt))
check(os.Chtimes("src", tt, tt))
**Unaffected.**
### github.com/flynn/flynn/vendor/github.com/gorilla/handlers/handlers.go:66
t := time.Now()
...
writeLog(h.writer, req, url, t, logger.Status(), logger.Size())
writeLog calls buildCommonLogLine, which eventually calls t.Format.
**Unaffected.**
### github.com/ncw/rclone/vendor/google.golang.org/grpc/server.go:586
if err == nil && outPayload != nil {
outPayload.SentTime = time.Now()
stats.HandleRPC(stream.Context(), outPayload)
}
SentTime seems to never be used. Client code could call stats.RegisterRPCHandler to do stats processing and look at SentTime.
Any use of time.Since(SentTime) would be improved by having SentTime be monotonic here.
There are no calls to stats.RegisterRPCHandler in the entire corpus.
**Unaffected.**
### github.com/openshift/origin/vendor/github.com/influxdata/influxdb/models/points.go:1316
func (p *point) UnmarshalBinary(b []byte) error {
...
p.time = time.Now()
p.time.UnmarshalBinary(b[i:])
...
}
That's weird. It looks like it is setting p.time in case of an error in UnmarshalBinary, instead of checking for and propagating an error. All the other ways that a p.time is initialized end up using non-monotonic times, because they came from time.Unix or t.Round. Assuming that bad decodings are rare, going to call it unaffected.
**Unaffected** (but not completely sure).
### github.com/zyedidia/micro/cmd/micro/util.go
// GetModTime returns the last modification time for a given file
// It also returns a boolean if there was a problem accessing the file
func GetModTime(path string) (time.Time, bool) {
info, err := os.Stat(path)
if err != nil {
return time.Now(), false
}
return info.ModTime(), true
}
The result is recorded in the field Buffer.ModTime and then checked against future calls to GetModTime to see if the file changed:
// We should only use last time's eventhandler if the file wasn't modified by someone else in the meantime
if b.ModTime == buffer.ModTime {
b.EventHandler = buffer.EventHandler
b.EventHandler.buf = b
}
and
if modTime != b.ModTime {
choice, canceled := messenger.YesNoPrompt("The file has changed since it was last read. Reload file? (y,n)")
...
}
Normally Buffer.ModTime will be a wall time, but if the file doesn't exist Buffer.ModTime will be a monotonic time that will not compare == to any file time. That's the desired behavior here.
**Unaffected** (or maybe fixed).
### github.com/gravitational/teleport/lib/auth/init_test.go:59
// test TTL by converting the generated cert to text -> back and making sure ExpireAfter is valid
ttl := time.Second * 10
expiryDate := time.Now().Add(ttl)
bytes, err := t.GenerateHostCert(priv, pub, "id1", "example.com", teleport.Roles{teleport.RoleNode}, ttl)
c.Assert(err, IsNil)
pk, _, _, _, err := ssh.ParseAuthorizedKey(bytes)
c.Assert(err, IsNil)
copy, ok := pk.(*ssh.Certificate)
c.Assert(ok, Equals, true)
c.Assert(uint64(expiryDate.Unix()), Equals, copy.ValidBefore)
This is jittery, in the sense that the computed expiryDate may not exactly match the cert generation that—one must assume—grabs the current time and adds the passed ttl to it to compute ValidBefore. It's unclear without digging exactly how the cert gets generated (there seems to be an RPC, but I don't know if it's to a test server in the same process). Either way, the two times are only possibly equal because of the rounding to second granularity. Even today, if the call expiryDate := time.Now().Add(ttl) happens 1 nanosecond before a wall time second boundary, this test will fail. Moving to monotonic time will not change the fact that it's jittery.
**Unaffected.**
### github.com/aws/aws-sdk-go/private/model/api/operation.go:420
case "timestamp":
str = `aws.Time(time.Now())`
This is the example generator for the AWS documentation. An aws.Time is always just being put into a structure to send over the wire in JSON format to AWS, so these remain OK.
**Unaffected.**
### github.com/influxdata/telegraf/plugins/inputs/mongodb/mongodb_data_test.go:17
d := NewMongodbData(
&StatLine{
...
Time: time.Now(),
...
},
...
)
**Unaffected** (see above from same file).
### github.com/aws/aws-sdk-go/service/datapipeline/examples_test.go:36
params := &datapipeline.ActivatePipelineInput{
...
StartTimestamp: aws.Time(time.Now()),
}
resp, err := svc.ActivatePipeline(params)
The svc.ActivatePipeline call serializes StartTimestamp to JSON (just once).
**Unaffected.**
### github.com/jessevdk/go-flags/man.go:177
t := time.Now()
fmt.Fprintf(wr, ".TH %s 1 \"%s\"\n", manQuote(p.Name), t.Format("2 January 2006"))
**Unaffected.**
### k8s.io/heapster/events/manager/manager_test.go:28
batch := &core.EventBatch{
Timestamp: time.Now(),
Events: []*kube_api.Event{},
}
Later used as:
buffer.WriteString(fmt.Sprintf("EventBatch Timestamp: %s\n", batch.Timestamp))
**Unaffected.**
### k8s.io/heapster/metrics/storage/podmetrics/reststorage.go:121
CreationTimestamp: unversioned.NewTime(time.Now())
But CreationTimestamp is only ever checked for being the zero time or not.
**Unaffected.**
### github.com/revel/revel/server.go:46
start := time.Now()
...
// Revel request access log format
// RequestStartTime ClientIP ResponseStatus RequestLatency HTTPMethod URLPath
// Sample format:
// 2016/05/25 17:46:37.112 127.0.0.1 200 270.157µs GET /
requestLog.Printf("%v %v %v %10v %v %v",
start.Format(requestLogTimeFormat),
ClientIP(r),
c.Response.Status,
time.Since(start),
r.Method,
r.URL.Path,
)
**Unaffected.**
### github.com/hashicorp/consul/command/agent/agent.go:1426
Expires: time.Now().Add(check.TTL).Unix(),
**Unaffected.**
### github.com/drone/drone/server/login.go:143
exp := time.Now().Add(time.Hour * 72).Unix()
**Unaffected.**
### github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/transport/listener.go:113:
tmpl := x509.Certificate{
NotBefore: time.Now(),
NotAfter: time.Now().Add(365 * (24 * time.Hour)),
...
}
...
derBytes, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
**Unaffected.**
### github.com/ethereum/go-ethereum/swarm/api/http/server.go:189
http.ServeContent(w, r, "", time.Now(), bytes.NewReader([]byte(newKey)))
eventually uses the passed time in formatting:
w.Header().Set("Last-Modified", modtime.UTC().Format(TimeFormat))
**Unaffected.**
### github.com/hashicorp/consul/vendor/google.golang.org/grpc/call.go:187
if sh != nil {
ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method})
begin := &stats.Begin{
Client: true,
BeginTime: time.Now(),
FailFast: c.failFast,
}
sh.HandleRPC(ctx, begin)
}
defer func() {
if sh != nil {
end := &stats.End{
Client: true,
EndTime: time.Now(),
Error: e,
}
sh.HandleRPC(ctx, end)
}
}()
If something subtracted BeginTime and EndTime, that would be fixed by monotonic times.
I don't see any implementations of StatsHandler in the tree, though, so sh must be nil.
**Unaffected.**
### github.com/hashicorp/vault/builtin/logical/pki/backend_test.go:396
if !cert.NotBefore.Before(time.Now().Add(-10 * time.Second)) {
return nil, fmt.Errorf("Validity period not far enough in the past")
}
cert.NotBefore is usually the result of decoding an wire format certificate,
so it's not monotonic, so the time will collapse to wall time during the Before check.
**Unaffected.**
### github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/namespace/lifecycle/admission_test.go:194
fakeClock := clock.NewFakeClock(time.Now())
The clock being implemented does Since, After, and other relative manipulation only.
**Unaffected.**
## Unaffected (but uses time.Time as wall time multiple times)
These are split out because an obvious optimization would be to store just the monotonic time
and rederive the wall time using the current wall-vs-monotonic correspondence from the
operating system. Using a wall form multiple times in this case could show up as jitter.
The proposal does _not_ suggest this optimization, precisely because of cases like these.
### github.com/docker/distribution/registry/storage/driver/inmemory/mfs.go:195
// mkdir creates a child directory under d with the given name.
func (d *dir) mkdir(name string) (*dir, error) {
... d.mod = time.Now() ...
}
ends up being used by
fi := storagedriver.FileInfoFields{
Path: path,
IsDir: found.isdir(),
ModTime: found.modtime(),
}
which will result in that time being returned by an os.FileInfo implementation's ModTime method.
**Unaffected** (but uses time multiple times).
### github.com/minio/minio/cmd/server-startup-msg_test.go:52
// given
var expiredDate = time.Now().Add(time.Hour * 24 * (30 - 1)) // 29 days.
var fakeCerts = []*x509.Certificate{
... NotAfter: expiredDate ...
}
expectedMsg := colorBlue("\nCertificate expiry info:\n") +
colorBold(fmt.Sprintf("#1 Test cert will expire on %s\n", expiredDate))
msg := getCertificateChainMsg(fakeCerts)
if msg != expectedMsg {
t.Fatalf("Expected message was: %s, got: %s", expectedMsg, msg)
}
**Unaffected** (but uses time multiple times).
### github.com/pingcap/tidb/expression/builtin_string_test.go:42
{types.Time{Time: types.FromGoTime(time.Now()), Fsp: 6, Type: mysql.TypeDatetime}, 26},
The call to FromGoTime does:
func FromGoTime(t gotime.Time) TimeInternal {
year, month, day := t.Date()
hour, minute, second := t.Clock()
microsecond := t.Nanosecond() / 1000
return newMysqlTime(year, int(month), day, hour, minute, second, microsecond)
}
**Unaffected** (but uses time multiple times).
### github.com/docker/docker/vendor/github.com/docker/distribution/registry/client/repository.go:750
func (bs *blobs) Create(ctx context.Context, options ...distribution.BlobCreateOption) (distribution.BlobWriter, error) {
...
return &httpBlobUpload{
statter: bs.statter,
client: bs.client,
uuid: uuid,
startedAt: time.Now(),
location: location,
}, nil
}
That field implements the distribution.BlobWriter interface's StartedAt method; the value is eventually copied into a handlers.blobUploadState, which is sometimes serialized to JSON and reconstructed. The serialization seems to be the single use.
**Unaffected** (but not completely sure about use count).
### github.com/pingcap/pd/_vendor/vendor/golang.org/x/net/internal/timeseries/timeseries.go:83
// A Clock tells the current time.
type Clock interface {
Time() time.Time
}
type defaultClock int
var defaultClockInstance defaultClock
func (defaultClock) Time() time.Time { return time.Now() }
Let's look at how that gets used.
The main use is to get a now time and then check whether
if ts.levels[0].end.Before(now) {
ts.advance(now)
}
but levels[0].end was rounded, meaning it's a wall time. advance then does:
if !t.After(ts.levels[0].end) {
return
}
for i := 0; i < len(ts.levels); i++ {
level := ts.levels[i]
if !level.end.Before(t) {
break
}
// If the time is sufficiently far, just clear the level and advance
// directly.
if !t.Before(level.end.Add(level.size * time.Duration(ts.numBuckets))) {
for _, b := range level.buckets {
ts.resetObservation(b)
}
level.end = time.Unix(0, (t.UnixNano()/level.size.Nanoseconds())*level.size.Nanoseconds())
}
for t.After(level.end) {
level.end = level.end.Add(level.size)
level.newest = level.oldest
level.oldest = (level.oldest + 1) % ts.numBuckets
ts.resetObservation(level.buckets[level.newest])
}
t = level.end
}
**Unaffected** (but uses time multiple times).
### github.com/astaxie/beego/logs/logger_test.go:24
func TestFormatHeader_0(t *testing.T) {
tm := time.Now()
if tm.Year() >= 2100 {
t.FailNow()
}
dur := time.Second
for {
if tm.Year() >= 2100 {
break
}
h, _ := formatTimeHeader(tm)
if tm.Format("2006/01/02 15:04:05 ") != string(h) {
t.Log(tm)
t.FailNow()
}
tm = tm.Add(dur)
dur *= 2
}
}
**Unaffected** (but uses time multiple times).
### github.com/attic-labs/noms/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4_test.go:418
ctx := &signingCtx{
...
Time: time.Now(),
ExpireTime: 5 * time.Second,
}
ctx.buildCanonicalString()
expected := "https://example.org/bucket/key-._~,!@#$%^&*()?Foo=z&Foo=o&Foo=m&Foo=a"
assert.Equal(t, expected, ctx.Request.URL.String())
ctx is used as:
ctx.formattedTime = ctx.Time.UTC().Format(timeFormat)
ctx.formattedShortTime = ctx.Time.UTC().Format(shortTimeFormat)
and then ctx.formattedTime is used sometimes and ctx.formattedShortTime is used other times.
**Unaffected** (but uses time multiple times).
### github.com/zenazn/goji/example/models.go:21
var Greets = []Greet{
{"carl", "Welcome to Gritter!", time.Now()},
{"alice", "Wanna know a secret?", time.Now()},
{"bob", "Okay!", time.Now()},
{"eve", "I'm listening...", time.Now()},
}
used by:
// Write out a representation of the greet
func (g Greet) Write(w io.Writer) {
fmt.Fprintf(w, "%s\n@%s at %s\n---\n", g.Message, g.User,
g.Time.Format(time.UnixDate))
}
**Unaffected** (but may use wall representation multiple times).
### github.com/afex/hystrix-go/hystrix/rolling/rolling_timing.go:77
r.Mutex.RLock()
now := time.Now()
bucket, exists := r.Buckets[now.Unix()]
r.Mutex.RUnlock()
if !exists {
r.Mutex.Lock()
defer r.Mutex.Unlock()
r.Buckets[now.Unix()] = &timingBucket{}
bucket = r.Buckets[now.Unix()]
}
**Unaffected** (but uses wall representation multiple times).
## Fixed
### github.com/hashicorp/vault/vendor/golang.org/x/net/http2/transport.go:721
func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
...
cc.lastActive = time.Now()
...
}
matches against:
func traceGotConn(req *http.Request, cc *ClientConn) {
... ci.IdleTime = time.Now().Sub(cc.lastActive) ...
}
**Fixed.**
Only for debugging, though.
### github.com/docker/docker/vendor/github.com/hashicorp/serf/serf/serf.go:1417
// reap is called with a list of old members and a timeout, and removes
// members that have exceeded the timeout. The members are removed from
// both the old list and the members itself. Locking is left to the caller.
func (s *Serf) reap(old []*memberState, timeout time.Duration) []*memberState {
now := time.Now()
...
for i := 0; i < n; i++ {
...
// Skip if the timeout is not yet reached
if now.Sub(m.leaveTime) <= timeout {
continue
}
...
}
...
}
and m.leaveTime is always initialized by calling time.Now.
**Fixed.**
### github.com/hashicorp/consul/consul/acl_replication.go:173
defer metrics.MeasureSince([]string{"consul", "leader", "updateLocalACLs"}, time.Now())
This is the canonical way to use the github.com/armon/go-metrics package.
func MeasureSince(key []string, start time.Time) {
globalMetrics.MeasureSince(key, start)
}
func (m *Metrics) MeasureSince(key []string, start time.Time) {
...
now := time.Now()
elapsed := now.Sub(start)
msec := float32(elapsed.Nanoseconds()) / float32(m.TimerGranularity)
m.sink.AddSample(key, msec)
}
**Fixed.**
### github.com/flynn/flynn/vendor/gopkg.in/mgo.v2/session.go:3598
if iter.timeout >= 0 {
if timeout.IsZero() {
timeout = time.Now().Add(iter.timeout)
}
if time.Now().After(timeout) {
iter.timedout = true
...
}
}
**Fixed.**
### github.com/huichen/wukong/examples/benchmark.go:173
t4 := time.Now()
done := make(chan bool)
recordResponse := recordResponseLock{}
recordResponse.count = make(map[string]int)
for iThread := 0; iThread < numQueryThreads; iThread++ {
go search(done, &recordResponse)
}
for iThread := 0; iThread < numQueryThreads; iThread++ {
<-done
}
// 记录时间并计算分词速度
t5 := time.Now()
log.Printf("搜索平均响应时间 %v 毫秒",
t5.Sub(t4).Seconds()*1000/float64(numRepeatQuery*len(searchQueries)))
log.Printf("搜索吞吐量每秒 %v 次查询",
float64(numRepeatQuery*numQueryThreads*len(searchQueries))/
t5.Sub(t4).Seconds())
The first print is "Search average response time %v milliseconds" and the second is "Search throughput %v queries per second." The comment above t5 reads "record the time and compute the segmentation speed."
**Fixed.**
### github.com/ncw/rclone/vendor/google.golang.org/grpc/call.go:171
if EnableTracing {
...
if deadline, ok := ctx.Deadline(); ok {
c.traceInfo.firstLine.deadline = deadline.Sub(time.Now())
}
...
}
Here ctx is a context.Context. We should probably arrange for ctx.Deadline to return monotonic times.
If it does, then this code is fixed.
If it does not, then this code is unaffected.
**Fixed.**
### github.com/hashicorp/consul/consul/fsm.go:281
defer metrics.MeasureSince([]string{"consul", "fsm", "prepared-query", string(req.Op)}, time.Now())
See MeasureSince above.
**Fixed.**
### github.com/docker/libnetwork/vendor/github.com/Sirupsen/logrus/text_formatter.go:27
var (
baseTimestamp time.Time
isTerminal bool
)
func init() {
baseTimestamp = time.Now()
isTerminal = IsTerminal()
}
func miniTS() int {
return int(time.Since(baseTimestamp) / time.Second)
}
**Fixed.**
### github.com/flynn/flynn/vendor/golang.org/x/net/http2/go17.go:54
if ci.WasIdle && !cc.lastActive.IsZero() {
ci.IdleTime = time.Now().Sub(cc.lastActive)
}
See above.
**Fixed.**
### github.com/zyedidia/micro/cmd/micro/eventhandler.go:102
// Remove creates a remove text event and executes it
func (eh *EventHandler) Remove(start, end Loc) {
e := &TextEvent{
C: eh.buf.Cursor,
EventType: TextEventRemove,
Start: start,
End: end,
Time: time.Now(),
}
eh.Execute(e)
}
The time here is used by
// Undo the first event in the undo stack
func (eh *EventHandler) Undo() {
t := eh.UndoStack.Peek()
...
startTime := t.Time.UnixNano() / int64(time.Millisecond)
...
for {
t = eh.UndoStack.Peek()
...
if startTime-(t.Time.UnixNano()/int64(time.Millisecond)) > undoThreshold {
return
}
startTime = t.Time.UnixNano() / int64(time.Millisecond)
...
}
}
If this avoided the call to UnixNano (used t.Sub instead), then all the times involved would be monotonic and the elapsed time computation would be independent of wall time. As written, a wall time adjustment during Undo will still break the code. Without any monotonic times, a wall time adjustment before Undo also broke the code; that failure no longer happens.
**Fixed.**
### github.com/ethereum/go-ethereum/cmd/geth/chaincmd.go:186
start = time.Now()
fmt.Println("Compacting entire database...")
if err = db.LDB().CompactRange(util.Range{}); err != nil {
utils.Fatalf("Compaction failed: %v", err)
}
fmt.Printf("Compaction done in %v.\n\n", time.Since(start))
**Fixed.**
### github.com/drone/drone/shared/oauth2/oauth2.go:176
// Expired reports whether the token has expired or is invalid.
func (t *Token) Expired() bool {
if t.AccessToken == "" {
return true
}
if t.Expiry.IsZero() {
return false
}
return t.Expiry.Before(time.Now())
}
t.Expiry is set with:
if b.ExpiresIn == 0 {
tok.Expiry = time.Time{}
} else {
tok.Expiry = time.Now().Add(time.Duration(b.ExpiresIn) * time.Second)
}
**Fixed.**
### github.com/coreos/etcd/auth/simple_token.go:88
for {
select {
case t := <-tm.addSimpleTokenCh:
tm.tokens[t] = time.Now().Add(simpleTokenTTL)
case t := <-tm.resetSimpleTokenCh:
if _, ok := tm.tokens[t]; ok {
tm.tokens[t] = time.Now().Add(simpleTokenTTL)
}
case t := <-tm.deleteSimpleTokenCh:
delete(tm.tokens, t)
case <-tokenTicker.C:
nowtime := time.Now()
for t, tokenendtime := range tm.tokens {
if nowtime.After(tokenendtime) {
tm.deleteTokenFunc(t)
delete(tm.tokens, t)
}
}
case waitCh := <-tm.stopCh:
tm.tokens = make(map[string]time.Time)
waitCh <- struct{}{}
return
}
}
**Fixed.**
### github.com/docker/docker/cli/command/node/ps_test.go:105
return []swarm.Task{
*Task(TaskID("taskID1"), ServiceID("failure"),
WithStatus(Timestamp(time.Now().Add(-2*time.Hour)), StatusErr("a task error"))),
*Task(TaskID("taskID2"), ServiceID("failure"),
WithStatus(Timestamp(time.Now().Add(-3*time.Hour)), StatusErr("a task error"))),
*Task(TaskID("taskID3"), ServiceID("failure"),
WithStatus(Timestamp(time.Now().Add(-4*time.Hour)), StatusErr("a task error"))),
}, nil
It's just a test, but Timestamp sets the Timestamp field in the swarm.TaskStatus used eventually in docker/cli/command/task/print.go:
strings.ToLower(units.HumanDuration(time.Since(task.Status.Timestamp))),
Having a monotonic time in the swarm.TaskStatus makes time.Since more accurate.
**Fixed.**
### github.com/docker/docker/integration-cli/docker_api_attach_test.go:130
conn.SetReadDeadline(time.Now().Add(time.Second))
**Fixed.**
### github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:1696
timeout := 2 * time.Minute
for start := time.Now(); time.Since(start) < timeout; time.Sleep(5 * time.Second) {
...
}
**Fixed.**
### github.com/onsi/gomega/internal/asyncassertion/async_assertion_test.go:318
t := time.Now()
failures := InterceptGomegaFailures(func() {
Eventually(c, 0.1).Should(Receive())
})
Ω(time.Since(t)).Should(BeNumerically("<", 90*time.Millisecond))
**Fixed.**
### github.com/hashicorp/vault/physical/consul.go:344
defer metrics.MeasureSince([]string{"consul", "list"}, time.Now())
**Fixed.**
### github.com/hyperledger/fabric/vendor/golang.org/x/net/context/go17.go:62
// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)).
// ...
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) {
return WithDeadline(parent, time.Now().Add(timeout))
}
**Fixed.**
### github.com/hashicorp/consul/consul/state/tombstone_gc.go:134
// nextExpires is used to calculate the next expiration time
func (t *TombstoneGC) nextExpires() time.Time {
expires := time.Now().Add(t.ttl)
remain := expires.UnixNano() % int64(t.granularity)
adj := expires.Add(t.granularity - time.Duration(remain))
return adj
}
used by:
func (t *TombstoneGC) Hint(index uint64) {
expires := t.nextExpires()
...
// Check for an existing expiration timer
exp, ok := t.expires[expires]
if ok {
...
return
}
// Create new expiration time
t.expires[expires] = &expireInterval{
maxIndex: index,
timer: time.AfterFunc(expires.Sub(time.Now()), func() {
t.expireTime(expires)
}),
}
}
The granularity rounding will usually result in something that can be used as a map key, but not always.
The code is using the rounding only as an optimization, so it doesn't actually matter if a few extra keys get generated.
More importantly, the time passed to time.AfterFunc ends up monotonic, so that timers fire correctly.
**Fixed.**
### github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/storage/etcd/etcd_helper.go:310
startTime := time.Now()
...
metrics.RecordEtcdRequestLatency("get", getTypeName(listPtr), startTime)
which ends up in:
func RecordEtcdRequestLatency(verb, resource string, startTime time.Time) {
etcdRequestLatenciesSummary.WithLabelValues(verb, resource).Observe(float64(time.Since(startTime) / time.Microsecond))
}
**Fixed.**
### github.com/pingcap/pd/server/util.go:215
start := time.Now()
ctx, cancel := context.WithTimeout(c.Ctx(), requestTimeout)
resp, err := m.Status(ctx, endpoint)
cancel()
if cost := time.Now().Sub(start); cost > slowRequestTime {
log.Warnf("check etcd %s status, resp: %v, err: %v, cost: %s", endpoint, resp, err, cost)
}
**Fixed.**
### github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/instrumented_services.go:235
func (in instrumentedImageManagerService) ImageStatus(image *runtimeApi.ImageSpec) (*runtimeApi.Image, error) {
...
defer recordOperation(operation, time.Now())
...
}
// recordOperation records the duration of the operation.
func recordOperation(operation string, start time.Time) {
metrics.RuntimeOperations.WithLabelValues(operation).Inc()
metrics.RuntimeOperationsLatency.WithLabelValues(operation).Observe(metrics.SinceInMicroseconds(start))
}
**Fixed.**
### github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/dockertools/instrumented_docker.go:58
defer recordOperation(operation, time.Now())
**Fixed.** (see previous)
### github.com/coreos/etcd/tools/functional-tester/etcd-runner/command/global.go:103
start := time.Now()
for i := 1; i < len(rcs)*rounds+1; i++ {
select {
case <-finished:
if i%100 == 0 {
fmt.Printf("finished %d, took %v\n", i, time.Since(start))
start = time.Now()
}
case <-time.After(time.Minute):
log.Panic("no progress after 1 minute!")
}
}
**Fixed.**
### github.com/reducedb/encoding/benchtools/benchtools.go:98
now := time.Now()
...
if err = codec.Compress(in, inpos, len(in), out, outpos); err != nil {
return 0, nil, err
}
since := time.Since(now).Nanoseconds()
**Fixed.**
### github.com/docker/swarm/vendor/github.com/hashicorp/consul/api/semaphore.go:200
start := time.Now()
attempts := 0
WAIT:
// Check if we should quit
select {
case <-stopCh:
return nil, nil
default:
}
// Handle the one-shot mode.
if s.opts.SemaphoreTryOnce && attempts > 0 {
elapsed := time.Now().Sub(start)
if elapsed > qOpts.WaitTime {
return nil, nil
}
qOpts.WaitTime -= elapsed
}
attempts++
... goto WAIT ...
**Fixed.**
### github.com/gravitational/teleport/lib/reversetunnel/localsite.go:83
func (s *localSite) GetLastConnected() time.Time {
return time.Now()
}
This gets recorded in a services.Site's LastConnected field, the only use of which is:
c.Assert(time.Since(sites[0].LastConnected).Seconds() < 5, Equals, true)
**Fixed.**
### github.com/coreos/etcd/tools/benchmark/cmd/watch.go:201
st := time.Now()
for range r.Events {
results <- report.Result{Start: st, End: time.Now()}
bar.Increment()
atomic.AddInt32(&nrRecvCompleted, 1)
}
Those fields get used by
func (res *Result) Duration() time.Duration { return res.End.Sub(res.Start) }
func (r *report) processResult(res *Result) {
if res.Err != nil {
r.errorDist[res.Err.Error()]++
return
}
dur := res.Duration()
r.lats = append(r.lats, dur.Seconds())
r.avgTotal += dur.Seconds()
if r.sps != nil {
r.sps.Add(res.Start, dur)
}
}
The duration computation is fixed by use of monotonic time. The call to r.sps.Add buckets the start time by converting to Unix seconds and is therefore unaffected (start time only used once other than the duration calculation, so no visible jitter).
**Fixed.**
### github.com/flynn/flynn/vendor/github.com/flynn/oauth2/internal/token.go:191
token.Expiry = time.Now().Add(time.Duration(expires) * time.Second)
used by:
func (t *Token) expired() bool {
if t.Expiry.IsZero() {
return false
}
return t.Expiry.Add(-expiryDelta).Before(time.Now())
}
Only partly fixed because sometimes token.Expiry has been loaded from a JSON serialization of a fixed time. But in the case where the expiry was set from a duration, the duration is now correctly enforced.
**Fixed.**
### github.com/hashicorp/consul/consul/fsm.go:266
defer metrics.MeasureSince([]string{"consul", "fsm", "coordinate", "batch-update"}, time.Now())
**Fixed.**
### github.com/openshift/origin/vendor/github.com/coreos/etcd/clientv3/lease.go:437
now := time.Now()
l.mu.Lock()
for id, ka := range l.keepAlives {
if ka.nextKeepAlive.Before(now) {
tosend = append(tosend, id)
}
}
l.mu.Unlock()
ka.nextKeepAlive is set to either time.Now() or
nextKeepAlive := time.Now().Add(1 + time.Duration(karesp.TTL/3)*time.Second)
**Fixed.**
### github.com/eBay/fabio/cert/source_test.go:567
func waitFor(timeout time.Duration, up func() bool) bool {
until := time.Now().Add(timeout)
for {
if time.Now().After(until) {
return false
}
if up() {
return true
}
time.Sleep(100 * time.Millisecond)
}
}
**Fixed.**
### github.com/lucas-clemente/quic-go/ackhandler/sent_packet_handler_test.go:524
err := handler.ReceivedAck(&frames.AckFrame{LargestAcked: 1}, 1, time.Now())
Expect(err).NotTo(HaveOccurred())
Expect(handler.rttStats.LatestRTT()).To(BeNumerically("~", 10*time.Minute, 1*time.Second))
err = handler.ReceivedAck(&frames.AckFrame{LargestAcked: 2}, 2, time.Now())
Expect(err).NotTo(HaveOccurred())
Expect(handler.rttStats.LatestRTT()).To(BeNumerically("~", 5*time.Minute, 1*time.Second))
err = handler.ReceivedAck(&frames.AckFrame{LargestAcked: 6}, 3, time.Now())
Expect(err).NotTo(HaveOccurred())
Expect(handler.rttStats.LatestRTT()).To(BeNumerically("~", 1*time.Minute, 1*time.Second))
where:
func (h *sentPacketHandler) ReceivedAck(ackFrame *frames.AckFrame, withPacketNumber protocol.PacketNumber, rcvTime time.Time) error {
...
timeDelta := rcvTime.Sub(packet.SendTime)
h.rttStats.UpdateRTT(timeDelta, ackFrame.DelayTime, rcvTime)
...
}
and packet.SendTime is initialized (earlier) with time.Now.
**Fixed.**
### github.com/CodisLabs/codis/pkg/proxy/redis/conn.go:140
func (w *connWriter) Write(b []byte) (int, error) {
...
w.LastWrite = time.Now()
...
}
used by:
func (p *FlushEncoder) NeedFlush() bool {
...
if p.MaxInterval < time.Since(p.Conn.LastWrite) {
return true
}
...
}
**Fixed.**
### github.com/docker/docker/vendor/github.com/docker/swarmkit/manager/scheduler/scheduler.go:173
func (s *Scheduler) Run(ctx context.Context) error {
...
var (
debouncingStarted time.Time
commitDebounceTimer *time.Timer
)
...
// Watch for changes.
for {
select {
case event := <-updates:
switch v := event.(type) {
case state.EventCommit:
if commitDebounceTimer != nil {
if time.Since(debouncingStarted) > maxLatency {
...
}
} else {
commitDebounceTimer = time.NewTimer(commitDebounceGap)
debouncingStarted = time.Now()
...
}
}
...
}
}
**Fixed.**
### golang.org/x/net/nettest/conntest.go:361
c1.SetDeadline(time.Now().Add(10 * time.Millisecond))
**Fixed.**
### github.com/minio/minio/vendor/github.com/eapache/go-resiliency/breaker/breaker.go:120
expiry := b.lastError.Add(b.timeout)
if time.Now().After(expiry) {
b.errors = 0
}
where b.lastError is set using time.Now.
**Fixed.**
### github.com/pingcap/tidb/store/tikv/client.go:65
start := time.Now()
defer func() { sendReqHistogram.WithLabelValues("cop").Observe(time.Since(start).Seconds()) }()
**Fixed.**
### github.com/coreos/etcd/cmd/vendor/golang.org/x/net/context/go17.go:62
return WithDeadline(parent, time.Now().Add(timeout))
**Fixed** (see above).
### github.com/coreos/rkt/rkt/image/common_test.go:161
maxAge := 10
for _, tt := range tests {
age := time.Now().Add(time.Duration(tt.age) * time.Second)
got := useCached(age, maxAge)
if got != tt.use {
t.Errorf("expected useCached(%v, %v) to return %v, but it returned %v", age, maxAge, tt.use, got)
}
}
where:
func useCached(downloadTime time.Time, maxAge int) bool {
freshnessLifetime := int(time.Now().Sub(downloadTime).Seconds())
if maxAge > 0 && freshnessLifetime < maxAge {
return true
}
return false
}
**Fixed.**
### github.com/lucas-clemente/quic-go/flowcontrol/flow_controller.go:131
c.lastWindowUpdateTime = time.Now()
used as:
if c.lastWindowUpdateTime.IsZero() {
return
}
...
timeSinceLastWindowUpdate := time.Now().Sub(c.lastWindowUpdateTime)
**Fixed.**
### github.com/hashicorp/serf/serf/snapshot.go:327
now := time.Now()
if now.Sub(s.lastFlush) > flushInterval {
s.lastFlush = now
if err := s.buffered.Flush(); err != nil {
return err
}
}
**Fixed.**
### github.com/junegunn/fzf/src/matcher.go:210
startedAt := time.Now()
...
for matchesInChunk := range countChan {
...
if time.Now().Sub(startedAt) > progressMinDuration {
m.eventBox.Set(EvtSearchProgress, float32(count)/float32(numChunks))
}
}
**Fixed.**
### github.com/mitchellh/packer/vendor/google.golang.org/appengine/demos/helloworld/helloworld.go:19
var initTime = time.Now()
func handle(w http.ResponseWriter, r *http.Request) {
...
tmpl.Execute(w, time.Since(initTime))
}
**Fixed.**
### github.com/ncw/rclone/vendor/google.golang.org/appengine/internal/api.go:549
func (c *context) logFlusher(stop <-chan int) {
lastFlush := time.Now()
tick := time.NewTicker(flushInterval)
for {
select {
case <-stop:
// Request finished.
tick.Stop()
return
case <-tick.C:
force := time.Now().Sub(lastFlush) > forceFlushInterval
if c.flushLog(force) {
lastFlush = time.Now()
}
}
}
}
**Fixed.**
### github.com/ethereum/go-ethereum/cmd/geth/chaincmd.go:159
start := time.Now()
...
fmt.Printf("Import done in %v.\n\n", time.Since(start))
**Fixed.**
### github.com/nats-io/nats/test/conn_test.go:652
if firstDisconnect {
firstDisconnect = false
dtime1 = time.Now()
} else {
dtime2 = time.Now()
}
and later:
if (dtime1 == time.Time{}) || (dtime2 == time.Time{}) || (rtime == time.Time{}) || (atime1 == time.Time{}) || (atime2 == time.Time{}) || (ctime == time.Time{}) {
t.Fatalf("Some callbacks did not fire:\n%v\n%v\n%v\n%v\n%v\n%v", dtime1, rtime, atime1, atime2, dtime2, ctime)
}
if rtime.Before(dtime1) || dtime2.Before(rtime) || atime2.Before(atime1) || ctime.Before(atime2) {
t.Fatalf("Wrong callback order:\n%v\n%v\n%v\n%v\n%v\n%v", dtime1, rtime, atime1, atime2, dtime2, ctime)
}
**Fixed.**
### github.com/google/cadvisor/manager/container.go:456
// Schedule the next housekeeping. Sleep until that time.
if time.Now().Before(next) {
time.Sleep(next.Sub(time.Now()))
} else {
next = time.Now()
}
lastHousekeeping = next
**Fixed.**
### github.com/google/cadvisor/vendor/golang.org/x/oauth2/token.go:98
return t.Expiry.Add(-expiryDelta).Before(time.Now())
**Fixed** (see above).
### github.com/hashicorp/consul/consul/fsm.go:109
defer metrics.MeasureSince([]string{"consul", "fsm", "register"}, time.Now())
**Fixed.**
### github.com/hashicorp/vault/vendor/github.com/hashicorp/yamux/session.go:295
// Wait for a response
start := time.Now()
...
// Compute the RTT
return time.Now().Sub(start), nil
**Fixed.**
### github.com/go-kit/kit/examples/shipping/booking/instrumenting.go:31
defer func(begin time.Time) {
s.requestCount.With("method", "book").Add(1)
s.requestLatency.With("method", "book").Observe(time.Since(begin).Seconds())
}(time.Now())
**Fixed.**
### github.com/cyfdecyf/cow/timeoutset.go:22
func (ts *TimeoutSet) add(key string) {
now := time.Now()
ts.Lock()
ts.time[key] = now
ts.Unlock()
}
used by
func (ts *TimeoutSet) has(key string) bool {
ts.RLock()
t, ok := ts.time[key]
ts.RUnlock()
if !ok {
return false
}
if time.Now().Sub(t) > ts.timeout {
ts.del(key)
return false
}
return true
}
**Fixed.**
### github.com/prometheus/prometheus/vendor/k8s.io/client-go/1.5/rest/request.go:761
//Metrics for total request latency
start := time.Now()
defer func() {
metrics.RequestLatency.Observe(r.verb, r.finalURLTemplate(), time.Since(start))
}()
**Fixed.**
### github.com/ethereum/go-ethereum/p2p/discover/udp.go:383
for {
...
select {
...
case p := <-t.addpending:
p.deadline = time.Now().Add(respTimeout)
...
case now := <-timeout.C:
// Notify and remove callbacks whose deadline is in the past.
for el := plist.Front(); el != nil; el = el.Next() {
p := el.Value.(*pending)
if now.After(p.deadline) || now.Equal(p.deadline) {
...
}
}
}
}
**Fixed** assuming time channels receive monotonic times as well.
### k8s.io/heapster/metrics/sinks/manager.go:150
startTime := time.Now()
...
defer exporterDuration.
WithLabelValues(s.Name()).
Observe(float64(time.Since(startTime)) / float64(time.Microsecond))
**Fixed.**
### github.com/vmware/harbor/src/ui/auth/lock.go:43
func (ul *UserLock) Lock(username string) {
...
ul.failures[username] = time.Now()
}
used by:
func (ul *UserLock) IsLocked(username string) bool {
...
return time.Now().Sub(ul.failures[username]) <= ul.d
}
**Fixed.**
### github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubectl/resource_printer_test.go:1410
{"an hour ago", translateTimestamp(unversioned.Time{Time: time.Now().Add(-6e12)}), "1h"},
where
func translateTimestamp(timestamp unversioned.Time) string {
if timestamp.IsZero() {
return "<unknown>"
}
return shortHumanDuration(time.Now().Sub(timestamp.Time))
}
**Fixed.**
### github.com/pingcap/pd/server/kv.go:194
start := time.Now()
resp, err := clientv3.NewKV(c).Get(ctx, key, opts...)
if cost := time.Since(start); cost > kvSlowRequestTime {
log.Warnf("kv gets too slow: key %v cost %v err %v", key, cost, err)
}
**Fixed.**
### github.com/xtaci/kcp-go/sess.go:489
if interval > 0 && time.Now().After(lastPing.Add(interval)) {
...
lastPing = time.Now()
}
**Fixed.**
### github.com/go-xorm/xorm/lru_cacher.go:202
el.Value.(*sqlNode).lastVisit = time.Now()
used as
if removedNum <= core.CacheGcMaxRemoved &&
time.Now().Sub(e.Value.(*idNode).lastVisit) > m.Expired {
...
}
**Fixed.**
### github.com/openshift/origin/vendor/github.com/samuel/go-zookeeper/zk/conn.go:510
conn.SetWriteDeadline(time.Now().Add(c.recvTimeout))
**Fixed.**
### github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/client/leaderelection/leaderelection.go:236
le.observedTime = time.Now()
used as:
if le.observedTime.Add(le.config.LeaseDuration).After(now.Time) && ...
**Fixed.**
### k8s.io/heapster/events/sinks/manager.go:139
startTime := time.Now()
defer exporterDuration.
WithLabelValues(s.Name()).
Observe(float64(time.Since(startTime)) / float64(time.Microsecond))
**Fixed.**
### golang.org/x/net/ipv4/unicast_test.go:64
... p.SetReadDeadline(time.Now().Add(100 * time.Millisecond)) ...
**Fixed.**
### github.com/kelseyhightower/confd/vendor/github.com/Sirupsen/logrus/text_formatter.go:27
func init() {
baseTimestamp = time.Now()
isTerminal = IsTerminal()
}
func miniTS() int {
return int(time.Since(baseTimestamp) / time.Second)
}
**Fixed** (same as above, vendored in docker/libnetwork).
### github.com/openshift/origin/vendor/github.com/coreos/etcd/etcdserver/v3_server.go:693
start := time.Now()
...
return nil, s.parseProposeCtxErr(cctx.Err(), start)
where
curLeadElected := s.r.leadElectedTime()
prevLeadLost := curLeadElected.Add(-2 * time.Duration(s.Cfg.ElectionTicks) * time.Duration(s.Cfg.TickMs) * time.Millisecond)
if start.After(prevLeadLost) && start.Before(curLeadElected) {
return ErrTimeoutDueToLeaderFail
}
All the times involved end up being monotonic, making the After/Before checks more accurate.
**Fixed.**
## Proposal: `go install` should install executables in module mode outside a module
Authors: Jay Conrod, Daniel Martí
Last Updated: 2020-09-29
Discussion at https://golang.org/issue/40276.
## Abstract
Authors of executables need a simple, reliable, consistent way for users to
build and install exectuables in module mode without updating module
requirements in the current module's `go.mod` file.
## Background
`go get` is used to download and install executables, but it's also responsible
for managing dependencies in `go.mod` files. This causes confusion and
unintended side effects: for example, the command
`go get golang.org/x/tools/gopls` builds and installs `gopls`. If there's a
`go.mod` file in the current directory or any parent, this command also adds a
requirement on the module `golang.org/x/tools/gopls`, which is usually not
intended. When `GO111MODULE` is not set, `go get` will also run in GOPATH mode
when invoked outside a module.
These problems lead authors to write complex installation commands such as:
```
(cd $(mktemp -d); GO111MODULE=on go get golang.org/x/tools/gopls)
```
## Proposal
We propose augmenting the `go install` command to build and install packages
at specific versions, regardless of the current module context.
```
go install golang.org/x/tools/gopls@v0.4.4
```
To eliminate redundancy and confusion, we also propose deprecating and removing
`go get` functionality for building and installing packages.
### Details
The new `go install` behavior will be enabled when an argument has a version
suffix like `@latest` or `@v1.5.2`. Currently, `go install` does not allow
version suffixes. When a version suffix is used:
* `go install` runs in module mode, regardless of whether a `go.mod` file is
present. If `GO111MODULE=off`, `go install` reports an error, similar to
what `go mod download` and other module commands do.
* `go install` acts as if no `go.mod` file is present in the current directory
or parent directory.
* No module will be considered the "main" module.
* Errors are reported in some cases to ensure that consistent versions of
dependencies are used by users and module authors. See Rationale below.
* Command line arguments must not be meta-patterns (`all`, `std`, `cmd`)
or local directories (`./foo`, `/tmp/bar`).
* Command line arguments must refer to main packages (executables). If an
argument has a wildcard (`...`), it will only match main packages.
* Command line arguments must refer to packages in one module at a specific
version. All version suffixes must be identical. The versions of the
installed packages' dependencies are determined by that module's `go.mod`
file (if it has one).
* If that module has a `go.mod` file, it must not contain directives that
would cause it to be interpreted differently if the module were the main
module. In particular, it must not contain `replace` or `exclude`
directives.
If `go install` has arguments without version suffixes, its behavior will not
change. It will operate in the context of the main module. If run in module mode
outside of a module, `go install` will report an error.
With these restrictions, users can install executables using consistent commands.
Authors can provide simple installation instructions without worrying about
the user's working directory.
With this change, `go install` would overlap with `go get` even more, so we also
propose deprecating and removing the ability for `go get` to install packages.
* In Go 1.16, when `go get` is invoked outside a module or when `go get` is
invoked without the `-d` flag with arguments matching one or more main
packages, `go get` would print a deprecation warning recommending an
equivalent `go install` command.
* In a later release (likely Go 1.17), `go get` would no longer build or install
packages. The `-d` flag would be enabled by default. Setting `-d=false` would
be an error. If `go get` is invoked outside a module, it would print an error
recommending an equivalent `go install` command.
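As an illustration of the deprecation path, the migration for a typical tool
might look like the following (the module path is hypothetical):

```
# Go 1.15 and earlier: build and install an executable
go get example.com/cmd/tool

# Proposed equivalents
go install example.com/cmd/tool@latest   # install the executable
go get -d example.com/cmd/tool           # update go.mod only
```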
### Examples
```
# Install a single executable at the latest version
$ go install example.com/cmd/tool@latest
# Install multiple executables at the latest version
$ go install example.com/cmd/...@latest
# Install at a specific version
$ go install example.com/cmd/tool@v1.4.2
```
## Current `go install` and `go get` functionality
`go install` is used for building and installing packages within the context of
the main module. `go install` reports an error when invoked outside of a module
or when given arguments with version queries like `@latest`.
`go get` is used both for updating module dependencies in `go.mod` and for
building and installing executables. `go get` also works differently depending
on whether it's invoked inside or outside of a module.
These overlapping responsibilities lead to confusion. Ideally, we would have one
command (`go install`) for installing executables and one command (`go get`) for
changing dependencies.
Currently, when `go get` is invoked outside a module in module mode (with
`GO111MODULE=on`), its primary purpose is to build and install executables. In
this configuration, there is no main module, even if only one module provides
packages named on the command line. The build list (the set of module versions
used in the build) is calculated from requirements in `go.mod` files of modules
providing packages named on the command line. `replace` or `exclude` directives
from all modules are ignored. Vendor directories are also ignored.
When `go get` is invoked inside a module, its primary purpose is to update
requirements in `go.mod`. The `-d` flag is often used, which instructs `go get`
not to build or install packages. Explicit `go build` or `go install` commands
are often better for installing tools when dependency versions are specified in
`go.mod` and no update is desired. Like other build commands, `go get` loads the
build list from the main module's `go.mod` file, applying any `replace` or
`exclude` directives it finds there. `replace` and `exclude` directives in other
modules' `go.mod` files are never applied. Vendor directories in the main module
and in other modules are ignored; the `-mod=vendor` flag is not allowed.
The motivation for the current `go get` behavior was to make usage in module
mode similar to usage in GOPATH mode. In GOPATH mode, `go get` would download
repositories for any missing packages into `$GOPATH/src`, then build and install
those packages into `$GOPATH/bin` or `$GOPATH/pkg`. `go get -u` would update
repositories to their latest versions. `go get -d` would download repositories
without building packages. In module mode, `go get` works with requirements in
`go.mod` instead of repositories in `$GOPATH/src`.
## Rationale
### Why can't `go get` clone a git repository and build from there?
In module mode, the `go` command typically fetches dependencies from a
proxy. Modules are distributed as zip files that contain sources for specific
module versions. Even when `go` connects directly to a repository instead of a
proxy, it still generates zip files so that builds work consistently no matter
how modules are fetched. Those zip files don't contain nested modules or vendor
directories.
If `go get` cloned repositories, it would work very differently from other build
commands. That causes several problems:
* It adds complication (and bugs!) to the `go` command to support a new build
mode.
* It creates work for authors, who would need to ensure their programs can be
built with both `go get` and `go install`.
* It reduces speed and reliability for users. Modules may be available on a
proxy when the original repository is unavailable. Fetching modules from a
proxy is roughly 5-7x faster than cloning git repositories.
### Why can't vendor directories be used?
Vendor directories are not included in module zip files. Since they're not
present when a module is downloaded, there's no way to build with them.
We don't plan to include vendor directories in zip files in the future
either. Changing the set of files included in module zip files would break
`go.sum` hashes.
### Why can't directory `replace` directives be used?
For example:
```
replace example.com/sibling => ../sibling
```
`replace` directives with a directory path on the right side can't be used
because the directory must be outside the module. These directories can't be
present when the module is downloaded, so there's no way to build with them.
### Why can't module `replace` directives be used?
For example:
```
replace example.com/mod v1.0.0 => example.com/fork v1.0.1-bugfix
```
It is technically possible to apply these directives. If we did this, we would
still want some restrictions. First, an error would be reported if more than one
module provided packages named on the command line: we must be able to identify
a main module. Second, an error would be reported if any directory `replace`
directives were present: we don't want to introduce a new configuration where
some `replace` directives are applied but others are silently ignored.
However, there are two reasons to avoid applying `replace` directives at all.
First, applying `replace` directives would create inconsistency for users inside
and outside a module. When a package is built within a module with `go build` or
`go install`, only `replace` directives from the main module are applied, not
the module providing the package. When a package is built outside a module with
`go get`, no `replace` directives are applied. If `go install` applied `replace`
directives from the module providing the package, it would not be consistent
with the current behavior of any other build command. To eliminate confusion
about whether `replace` directives are applied, we propose that `go install`
reports errors when encountering them.
Second, if `go install` applied `replace` directives, it would take power away
from developers that depend on modules that provide tools. For example, suppose
the author of a popular code generation tool `gogen` forks a dependency
`genutil` to add a feature. They add a `replace` directive pointing to their
fork of `genutil` while waiting for a PR to merge. A user of `gogen` wants to
track the version they use in their `go.mod` file to ensure everyone on their
team uses a consistent version. Unfortunately, they can no longer build `gogen`
with `go install` because the `replace` is ignored. The author of `gogen` might
instruct their users to build with `go install`, but then users can't track the
dependency in their `go.mod` file, and they can't apply their own `require` and
`replace` directives to upgrade or fix other transitive dependencies. The author
of `gogen` could also instruct their users to copy the `replace` directive, but
this may conflict with other `require` and `replace` directives, and it may
cause similar problems for users further downstream.
### Why report errors instead of ignoring `replace`?
If `go install` ignored `replace` directives, it would be consistent with the
current behavior of `go get` when invoked outside a module. However, in
[#30515](https://golang.org/issue/30515) and related discussions, we found that
many developers are surprised by that behavior.
It seems better to be explicit that `replace` directives are only applied
locally within a module during development and not when users build packages
from outside the module. We'd like to encourage module authors to release
versions of their modules that don't rely on `replace` directives so that users
in other modules may depend on them easily.
If this behavior turns out not to be suitable (for example, authors prefer to
keep `replace` directives in `go.mod` at release versions and understand that
they won't affect users), then we could start ignoring `replace` directives in
the future, matching current `go get` behavior.
### Should `go.sum` files be checked?
Because there is no main module, `go install` will not use a `go.sum` file to
authenticate any downloaded module or `go.mod` file. The `go` command will still
use the checksum database ([sum.golang.org](https://sum.golang.org)) to
authenticate downloads, subject to privacy settings. This is consistent with the
current behavior of `go get`: when invoked outside a module, no `go.sum` file is
used.
The new `go install` command requires that only one module may provide packages
named on the command line, so it may be logical to use that module's `go.sum`
file to verify downloads. This avoids a problem in
[#28802](https://golang.org/issue/28802), a related proposal to verify downloads
against all `go.sum` files in dependencies: the build can't be broken by one bad
`go.sum` file in a dependency.
However, using the `go.sum` from the module named on the command line only
provides a marginal security benefit: it lets us authenticate private module
dependencies (those not available to the checksum database) when the module on
the command line is public. If the module named on the command line is private
or if the checksum database isn't used, then we can't authenticate the download
of its content (including the `go.sum` file), and we must trust the proxy. If
all dependencies are public, we can authenticate all downloads without `go.sum`.
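As a sketch, a user of a private module could disable both the proxy and the
checksum database for it with the `GOPRIVATE` setting (the module path below
is hypothetical):

```
GOPRIVATE=*.corp.example.com go install corp.example.com/cmd/tool@latest
```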
### Why require a version suffix when outside a module?
If no version suffix were required when `go install` is invoked outside a
module, then the meaning of the command would depend on whether the user's
working directory is inside a module. For example:
```
go install golang.org/x/tools/gopls
```
When invoked outside of a module, this command would run in `GOPATH` mode,
unless `GO111MODULE=on` is set. In module mode, it would install the latest
version of the executable.
When invoked inside a module, this command would use the main module's `go.mod`
file to determine the versions of the modules needed to build the package.
We currently have a similar problem with `go get`. Requiring the version suffix
makes the meaning of a `go install` command unambiguous.
### Why not a `-g` flag instead of `@latest`?
To install the latest version of an executable, the two commands below would be
equivalent:
```
go install -g golang.org/x/tools/gopls
go install golang.org/x/tools/gopls@latest
```
The `-g` flag has the advantage of being shorter for a common use case. However,
it would only be useful when installing the latest version of a package, since
`-g` would be implied by any version suffix.
The `@latest` suffix is clearer, and it implies that the command is
time-dependent and not reproducible. We prefer it for those reasons.
## Compatibility
The `go install` part of this proposal only applies to commands with version
suffixes on each argument. `go install` currently reports an error for these,
and this proposal does not recommend changing other functionality of
`go install`, so that part of the proposal is backward compatible.
The `go get` part of this proposal recommends deprecating and removing
functionality, so it's certainly not backward compatible. `go get -d` commands
will continue to work without modification though, and eventually, the `-d` flag
can be dropped.
Parts of this proposal are more strict than is technically necessary (for
example, requiring one module, forbidding `replace` directives). We could relax
these restrictions without breaking compatibility in the future if it seems
expedient. It would be much harder to add restrictions later.
## Implementation
An initial implementation of this feature was merged in
[CL 254365](https://go-review.googlesource.com/c/go/+/254365). Please try it
out!
## Future directions
The behavior with respect to `replace` directives was discussed extensively
before this proposal was written. There are three potential behaviors:
1. Ignore `replace` directives in all modules. This would be consistent with
other module-aware commands, which only apply `replace` directives from the
main module (defined in the current directory or a parent directory).
`go install pkg@version` ignores the current directory and any `go.mod`
file that might be present, so there is no main module.
2. Ensure only one module provides packages named on the command line, and
   treat that module as the main module, applying its module `replace`
   directives. Report errors for directory `replace` directives. This
is feasible, but it may have wider ecosystem effects; see "Why can't module
`replace` directives be used?" above.
3. Ensure only one module provides packages named on the command line, and
report errors for any `replace` directives it contains. This is the behavior
currently proposed.
Most people involved in this discussion have advocated for either (1) or (2).
The behavior in (3) is a compromise. If we find that the behavior in (1) is
strictly better than (2) or vice versa, we can switch to that behavior from
(3) without an incompatible change. Additionally, (3) eliminates ambiguity
for users and module authors about whether `replace` directives are applied.
Note that applying directory `replace` directives is not considered here for
the reasons in "Why can't directory `replace` directives be used?".
## Appendix: FAQ
### Why not apply `replace` directives from all modules?
In short, `replace` directives from different modules would conflict, and
that would make dependency management harder for most users.
For example, consider a case where two dependencies replace the same module
with different forks.
```
// in example.com/mod/a
replace example.com/mod/c => example.com/fork-a/c v1.0.0
// in example.com/mod/b
replace example.com/mod/c => example.com/fork-b/c v1.0.0
```
Another conflict would occur where two dependencies pin different versions
of the same module.
```
// in example.com/mod/a
replace example.com/mod/c => example.com/mod/c v1.1.0
// in example.com/mod/b
replace example.com/mod/c => example.com/mod/c v1.2.0
```
To avoid the possibility of conflict, the `go` command ignores `replace`
directives in modules other than the main module.
Modules are intended to scale to a large ecosystem, and in order for upgrades
to be safe, fast, and predictable, some rules must be followed, like semantic
versioning and [import compatibility](https://research.swtch.com/vgo-import).
Not relying on `replace` is one of these rules.
### How can module authors avoid `replace`?
`replace` is useful in several situations for local or short-term development,
for example:
* Changing multiple modules concurrently.
* Using a short-term fork of a dependency until a change is merged upstream.
* Using an old version of a dependency because a new version is broken.
* Working around migration problems, like `golang.org/x/lint` imported as
`github.com/golang/lint`. Many of these problems should be fixed by lazy
module loading ([#36460](https://golang.org/issue/36460)).
`replace` is safe to use in a module that is not depended on by other modules.
It's also safe to use in revisions that aren't depended on by other modules.
* If a `replace` directive is just meant for temporary local development by one
person, avoid checking it in. The `-modfile` flag may be used to build with
an alternative `go.mod` file. See also
  [#26640](https://golang.org/issue/26640), a feature request for a
`go.mod.local` file containing replacements and other local modifications.
* If a `replace` directive must be checked in to fix a short-term problem,
ensure at least one release or pre-release version is tagged before checking
it in. Don't tag a new release version with `replace` checked in (pre-release
versions may be okay, depending on how they're used). When the `go` command
looks for a new version of a module (for example, when running `go get` with
no version specified), it will prefer release versions. Tagging versions lets
you continue development on the main branch without worrying about users
fetching arbitrary commits.
* If a `replace` directive must be checked in to solve a long-term problem,
consider solutions that won't cause issues for dependent modules. If possible,
tag versions on a release branch with `replace` directives removed.
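As a sketch of the `-modfile` approach, a local-only replacement could be kept
in an alternative file that is never checked in (the file and module names
here are hypothetical):

```
# Copy go.mod and add the local replacement only to the copy.
cp go.mod go.local.mod
go mod edit -modfile=go.local.mod -replace=example.com/sibling=../sibling

# Build with the alternative file; the checked-in go.mod stays clean.
go build -modfile=go.local.mod ./...
```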
### When would `go install` be reproducible?
The new `go install` command will build an executable with the same set of
module versions on every invocation if both the following conditions are true:
* A specific version is requested in the command line argument, for example,
`go install example.com/cmd/foo@v1.0.0`.
* Every package needed to build the executable is provided by a module required
directly or indirectly by the `go.mod` file of the module providing the
executable. If the executable only imports standard library packages or
packages from its own module, no `go.mod` file is necessary.
An executable may not be bit-for-bit reproducible for other reasons. Debugging
information will include system paths (unless `-trimpath` is used). A package
may import different packages on different platforms (or may not build at all).
The installed Go version and the C toolchain may also affect binary
reproducibility.
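For example, an invocation satisfying both conditions might look like this
(the module path is hypothetical); adding `-trimpath` also removes system
paths from the debugging information:

```
go install -trimpath example.com/cmd/foo@v1.0.0
```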
### What happens if a module depends on a newer version of itself?
`go install` will report an error, as `go get` already does.
This sometimes happens when two modules depend on each other, and releases
are not tagged on the main branch. A command like `go get example.com/m@master`
will resolve `@master` to a pseudo-version lower than any release version.
The `go.mod` file at that pseudo-version may transitively depend on a newer
release version.
`go get` reports an error in this situation. In general, `go get` reports
an error when command line arguments refer, directly or indirectly, to
different versions of the same module. `go install` doesn't perform this
check yet, but it should be one of the conditions checked when running with
version suffix arguments.
## Appendix: usage of replace directives
In this proposal, `go install` would report errors for `replace` directives in
the module providing packages named on the command line. `go get` ignores these,
but the behavior may still surprise module authors and users. I've tried to
estimate the impact on the existing set of open source modules.
* I started with a list of 359,040 `main` packages that Russ Cox built during an
earlier study.
* I excluded packages with paths that indicate they were homework, examples,
tests, or experiments. 187,805 packages remained.
* Of these, I took a random sample of 19,000 packages (about 10%).
* These belonged to 13,874 modules. For each module, I downloaded the "latest"
version `go get` would fetch.
* I discarded repositories that were forks or couldn't be retrieved. 10,618
modules were left.
* I discarded modules that didn't have a `go.mod` file. 4,519 were left.
* Of these:
* 3982 (88%) don't use `replace` at all.
* 71 (2%) use directory `replace` only.
* 439 (9%) use module `replace` only.
* 27 (1%) use both.
* In the set of 439 `go.mod` files using module `replace` only, I tried to
classify why `replace` was used. A module may have multiple `replace`
directives and multiple classifications, so the percentages below don't add
to 100%.
* 165 used `replace` as a soft fork, for example, to point to a bug fix PR
instead of the original module.
* 242 used `replace` to pin a specific version of a dependency (the module
path is the same on both sides).
* 77 used `replace` to rename a dependency that was imported with another
name, for example, replacing `github.com/golang/lint` with the correct path,
`golang.org/x/lint`.
* 30 used `replace` to rename `golang.org/x` repos with their
`github.com/golang` mirrors.
* 11 used `replace` to bypass semantic import versioning.
* 167 used `replace` with `k8s.io` modules. Kubernetes has used `replace` to
bypass MVS, and dependent modules have been forced to do the same.
* 111 modules contained `replace` directives I couldn't automatically
classify. The ones I looked at seemed to mostly be forks or pins.
The modules I'm most concerned about are those that use `replace` as a soft fork
while submitting a bug fix to an upstream module; other problems have other
solutions that I don't think we need to design for here. Modules using soft fork
replacements are about 4% of the modules with `go.mod` files I sampled (165
/ 4519). This is a small enough set that I think we should move forward with the
proposal above.
|
# Proposal: Go 2 transition
Author: Ian Lance Taylor
Last update: October 15, 2018
## Abstract
A proposal for how to make incompatible changes from Go 1 to Go 2
while breaking as little as possible.
## Background
Currently the Go language and standard libraries are covered by the
[Go 1 compatibility guarantee](https://golang.org/doc/go1compat).
The goal of that document was to promise that new releases of Go would
not break existing working programs.
Among the goals for the Go 2 process is to consider changes to the
language and standard libraries that will break the guarantee.
Since Go is used in a distributed open source environment, we cannot
rely on a [flag
day](http://www.catb.org/jargon/html/F/flag-day.html).
We must permit the interoperation of different packages written using
different versions of Go.
Every language goes through version transitions.
As background, here are some notes on what other languages have done.
Feel free to skip the rest of this section.
### C
C language versions are driven by the ISO standardization process.
C language development has paid close attention to backward
compatibility.
After the first ISO standard, C90, every subsequent standard has
maintained strict backward compatibility.
Where new keywords have been introduced, they are introduced in a
namespace reserved by C90 (an underscore followed by an uppercase
ASCII letter) and are made more accessible via a `#define` macro in a
header file that did not previously exist (examples are `_Complex`,
defined as `complex` in `<complex.h>`, and `_Bool`, defined as `bool`
in `<stdbool.h>`).
None of the basic language semantics defined in C90 have changed.
In addition, most C compilers provide options to define precisely
which version of the C standard the code should be compiled for (for
example, `-std=c90`).
Most standard library implementations support feature macros that may
be #define’d before including the header files to specify exactly
which version of the library should be provided (for example,
`_ISOC99_SOURCE`).
While these features have had bugs, they are fairly reliable and are
widely used.
A key feature of these options is that code compiled at different
language/library versions can in general all be linked together and
work as expected.
The first standard, C90, did introduce breaking changes to the
previous C language implementations, known informally as K&R C.
New keywords were introduced, such as `volatile` (actually that might
have been the only new keyword in C90).
The precise implementation of integer promotion in integer expressions
changed from unsigned-preserving to value-preserving.
Fortunately it was easy to detect code using the new keywords due to
compilation errors, and easy to adjust that code.
The change in integer promotion actually made it less surprising to
naive users, and experienced users mostly used explicit casts to
ensure portability among systems with different integer sizes, so
while there was no automatic detection of problems not much code broke
in practice.
There were also some irritating changes.
C90 introduced trigraphs, which changed the behavior of some string
constants.
Compilers adapted with options like -no-trigraphs and -Wtrigraphs.
More seriously, C90 introduced the notion of undefined behavior, and
declared that programs that invoked undefined behavior might take
any action.
In K&R C, the cases that C90 described as undefined behavior were
mostly treated as what C90 called implementation-defined behavior: the
program would take some non-portable but predictable action.
Compiler writers absorbed the notion of undefined behavior, and
started writing optimizations that assumed that the behavior would not
occur.
This caused effects that surprised people not fluent in the C
standard.
I won’t go into the details here, but one example of this (from my
blog) is [signed overflow](http://www.airs.com/blog/archives/120).
C of course continues to be the preferred language for kernel
development and the glue language of the computing industry.
Though it has been partially replaced by newer languages, this is not
because of any choices made by new versions of C.
The lessons I see here are:
* Backward compatibility matters.
* Breaking compatibility in small ways is OK, as long as people can
spot the breakages through compiler options or compiler errors.
* Compiler options to select specific language/library versions are
useful, provided code compiled using different options can be linked
together.
* Unlimited undefined behavior is confusing for users.
### C++
C++ language versions are also now driven by the ISO standardization process.
Like C, C++ pays close attention to backward compatibility.
C++ has been historically more free with adding new keywords (there
are 10 new keywords in C++11).
This works out OK because the newer keywords tend to be relatively
long (`constexpr`, `nullptr`, `static_assert`) and compilation errors
make it easy to find code using the new keywords as identifiers.
C++ uses the same sorts of options for specifying the standard version
for language and libraries as are found in C.
It suffers from the same sorts of problems as C with regard to
undefined behavior.
An example of a breaking change in C++ was the change in the scope of
a variable declared in the initialization statement of a for loop.
In the pre-standard versions of C++, the scope of the variable
extended to the end of the enclosing block, as though it were declared
immediately before the for loop.
During the development of the first C++ standard, C++98, this was
changed so that the scope was only within the for loop itself.
Compilers adapted by introducing options like `-ffor-scope` so that
users could control the expected scope of the variable (for a period
of time, when compiling with neither `-ffor-scope` nor
`-fno-for-scope`, the GCC compiler used the old scope but warned about
any code that relied on it).
Despite the relatively strong backward compatibility, code written in
new versions of C++, like C++11, tends to have a very different feel
than code written in older versions of C++.
This is because styles have changed to use new language and library
features.
Raw pointers are less commonly used, range loops are used rather than
standard iterator patterns, new concepts like rvalue references and
move semantics are used widely, and so forth.
People familiar with older versions of C++ can struggle to understand
code written in new versions.
C++ is of course an enormously popular language, and the ongoing
language revision process has not harmed its popularity.
Besides the lessons from C, I would add:
* A new version may have a very different feel while remaining
backward compatible.
### Java
I know less about Java than about the other languages I discuss, so
there may be more errors here and there are certainly more biases.
Java is largely backward compatible at the byte-code level, meaning
that Java version N+1 libraries can call code written in, and
compiled by, Java version N (and N-1, N-2, and so forth).
Java source code is also mostly backward compatible, although they do
add new keywords from time to time.
The Java documentation is very detailed about potential compatibility
issues when moving from one release to another.
The Java standard library is enormous, and new packages are added at
each new release.
Packages are also deprecated from time to time.
Using a deprecated package will cause a warning at compile time (the
warning may be turned off), and after a few releases the deprecated
package will be removed (at least in theory).
Java does not seem to have many backward compatibility problems.
The problems are centered on the JVM: an older JVM generally will not
run newer releases, so you have to make sure that your JVM is at least
as new as that required by the newest library you want to use.
Java arguably has something of a forward compatibility problem in
that JVM bytecodes present a higher level interface than that of a
CPU, and that makes it harder to introduce new features that cannot
be directly represented using the existing bytecodes.
This forward compatibility problem is part of the reason that Java
generics use type erasure.
Changing the definition of existing bytecodes would have broken
existing programs that had already been compiled into bytecode.
Extending bytecodes to support generic types would have required a
large number of additional bytecodes to be defined.
This forward compatibility problem, to the extent that it is a
problem, does not exist for Go.
Since Go compiles to machine code, and implements all required run
time checks by generating additional machine code, there is no similar
forward compatibility issue.
But, in general:
* Be aware of how compatibility issues may restrict future changes.
### Python
Python 3.0 (also known as Python 3000) started development in 2006 and
was initially released in 2008.
In 2018 the transition is still incomplete.
Some people continue to use Python 2.7 (released in 2010).
This is not a path we want to emulate for Go 2.
The main reason for this slow transition appears to be lack of
backward compatibility.
Python 3.0 was intentionally incompatible with earlier versions of
Python.
Notably, `print` was changed from a statement to a function, and
strings were changed to use Unicode.
Python is often used in conjunction with C code, and the latter change
meant that any code that passed strings from Python to C required
tweaking the C code.
Because Python is an interpreted language, and because there is no
backward compatibility, it is impossible to mix Python 2 and Python
3 code in the same program.
This means that for a typical program that uses a range of libraries,
each of those libraries must be converted to Python 3 before the
program can be converted.
Since programs are in various states of conversion, libraries must
support Python 2 and 3 simultaneously.
Python supports statements of the form `from __future__ import
FEATURE`.
A statement like this changes the interpretation of the rest of the
file in some way.
For example, `from __future__ import print_function` changes `print`
from a statement (as in Python 2) to a function (as in Python 3).
This can be used to take incremental steps toward new language
versions, and to make it easier to share the same code among different
language versions.
So, we knew it already, but:
* Backward compatibility is essential.
* Compatibility of the interface to other languages is important.
* Upgrading to a new version is limited by the version that your
libraries support.
### Perl
The Perl 6 development process began in 2000.
The first stable version of the Perl 6 spec was announced in 2015.
This is not a path we want to emulate for Go 2.
There are many reasons for this slow path.
Perl 6 was intentionally not backward compatible: it was meant to fix
warts in the language.
Perl 6 was intended to be represented by a spec rather than, as with
previous versions of Perl, an implementation.
Perl 6 started with a set of change proposals, but then continued to
evolve over time, and then evolve some more.
Perl supports `use feature` which is similar to Python's `from
__future__ import`.
It changes the interpretation of the rest of the file to use a
specified new language feature.
* Don’t be Perl 6.
* Set and meet deadlines.
* Don’t change everything at once.
## Proposal
### Language changes
Pedantically speaking, we must have a way to speak about specific
language versions.
Each change to the Go language first appears in a Go release.
We will use Go release numbers to define language versions.
That is the only reasonable choice, but it can be confusing because
standard library changes are also associated with Go release numbers.
When thinking about compatibility, it will be necessary to
conceptually separate the Go language version from the standard
library version.
As an example of a specific change, type aliases were first available
in Go language version 1.9.
Type aliases were an example of a backward compatible language change.
All code written in Go language versions 1.0 through 1.8 continued to
work the same way with Go language 1.9.
Code using type aliases requires Go language 1.9 or later.
#### Language additions
Type aliases are an example of an addition to the language.
Code using the type alias syntax `type A = B` did not compile with Go
versions before 1.9.
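For illustration, here is a minimal file (not from the proposal) using the alias syntax; it compiles with Go 1.9 or later and fails to compile with Go 1.8:

```
package main

import "fmt"

// Text is an alias for string, using the `type A = B` syntax added
// in Go 1.9; Go 1.8 and earlier reject this declaration.
type Text = string

func greet() Text {
	return "hello"
}

func main() {
	var s string = greet() // Text and string are identical types
	fmt.Println(s)
}
```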
Type aliases, and other backward compatible changes since Go 1.0, show
us that for additions to the language it is not necessary for packages
to explicitly declare the minimum language version that they require.
Some packages changed to use type aliases.
When such a package was compiled with Go 1.8 tools, the package failed
to compile.
The package author can simply say: upgrade to Go 1.9, or downgrade to
an earlier version of the package.
None of the Go tools need to know about this requirement; it's implied
by the failure to compile with older versions of the tools.
It's true of course that programmers need to understand language
additions, but the tooling does not.
Neither the Go 1.8 tools nor the Go 1.9 tools need to explicitly know
that type aliases were added in Go 1.9, other than in the limited
sense that the Go 1.9 compiler will compile type aliases and the Go
1.8 compiler will not.
That said, the possibility of specifying a minimum language version to
get better error messages for unsupported language features is
discussed below.
#### Language removals
We must also consider language changes that simply remove features
from the language.
For example, [issue 3939](http://golang.org/issue/3939) proposes that
we remove the conversion `string(i)` for an integer value `i`.
If we make this change in, say, Go version 1.20, then packages that
use this syntax will stop compiling in Go 1.20.
(If you prefer to restrict backward incompatible changes to new major
versions, then replace 1.20 by 2.0 in this discussion; the problem
remains the same.)
In this case, packages using the old syntax have no simple recourse.
While we can provide tooling to convert pre-1.20 code into working
1.20 code, we can't force package authors to run those tools.
Some packages may be unmaintained but still useful.
Some organizations may want to upgrade to 1.20 without having to
requalify the versions of packages that they rely on.
Some package authors may want to use 1.20 even though their packages
now break, but do not have time to modify their package.
These scenarios suggest that we need a mechanism to specify the
maximum version of the Go language with which a package can be built.
Importantly, specifying the maximum version of the Go language should
not be taken to imply the maximum version of the Go tools.
The Go compiler released with Go version 1.20 must be able to build
packages using Go language 1.19.
This can be done by adding an option to cmd/compile (and, if
necessary, cmd/asm and cmd/link) along the lines of the `-std` option
supported by C compilers.
When cmd/compile sees the option, perhaps `-lang=go1.19`, it will
compile the code using the Go 1.19 syntax.
This requires cmd/compile to support all previous versions, one way or
another.
If supporting old syntaxes proves to be troublesome, the `-lang`
option could perhaps be implemented by passing the code through a
converter from the old version to the current one.
That would keep support of old versions out of cmd/compile proper, and
the converter could be useful for people who want to update their
code.
But it is unlikely that supporting old language versions will be a
significant problem.
Naturally, even though the package is built with the language version
1.19 syntax, it must in other respects be a 1.20 package: it must link
with 1.20 code, be able to call and be called by 1.20 code, and so
forth.
The go tool will need to know the maximum language version so that it
knows how to invoke cmd/compile.
Assuming we continue with the modules experiment, the logical place
for this information is the go.mod file.
The go.mod file for a module M can specify the maximum language
version for the packages that it defines.
This would be honored when M is downloaded as a dependency by some
other module.
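As an illustration only — the proposal does not specify a syntax — a go.mod file recording a maximum language version might look like this, where the `lang` directive is purely hypothetical:

```
module example.com/hello

// Hypothetical directive: records the maximum language version
// with which the packages in this module are known to build.
lang go1.20
```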
The maximum language version is not a minimum language version.
If a module requires features introduced in language 1.19, but can
also be built with 1.20, we can say that the maximum language version
is 1.20.
If we build with Go release 1.19, we will see that we are at less than
the maximum, and simply build with language version 1.19.
Maximum language versions greater than that supported by the current
tools can simply be ignored.
If we later build with Go release 1.21, we will build the module with
`-lang=go1.20`.
This means that the tools can set the maximum language version
automatically.
When we use Go release 1.30 to release a module, we can mark the
module as having maximum language version 1.30.
All users of the module will see this maximum version and do the right
thing.
This implies that we will have to support old versions of the language
indefinitely.
If we remove a language feature after version 1.25, version 1.26 and
all later versions will still have to support that feature if invoked
with the `-lang=go1.25` option (or `-lang=go1.24` or any other earlier
version in which the feature is supported).
Of course, if no `-lang` option is used, or if the option is
`-lang=go1.26` or later, the feature will not be available.
Since we do not expect wholesale removals of existing language
features, this should be a manageable burden.
I believe that this approach suffices for language removals.
#### Minimum language version
For better error messages it may be useful to permit the module file
to specify a minimum language version.
This is not required: if a module uses features introduced in
language version 1.N, then building it with 1.N-1 will fail at compile
time.
This may be confusing, but in practice it will likely be obvious what
the problem is.
That said, if modules can specify a minimum language version, the go
tool could produce an immediate, clear error message when building
with 1.N-1.
The minimum language version could potentially be set by the compiler
or some other tool.
When compiling each file, see which features it uses, and use that to
determine the minimum version.
It need not be precisely accurate.
This is just a suggestion, not a requirement.
It would likely provide a better user experience as the language
changes.
#### Language redefinitions
The Go language can also change in ways that are not additions or
removals, but are instead changes to the way a specific language
construct works.
For example, in Go 1.1 the size of the type `int` on 64-bit hosts
changed from 32 bits to 64 bits.
This change was relatively harmless, as the language does not specify
the exact size of `int`.
Potentially, though, some Go 1.0 programs continued to compile with Go
1.1 but stopped working.
A redefinition is a case where we have code that compiles successfully
with both versions 1.N and version 1.M, where M > N, and where the
meaning of the code is different in the two versions.
For example, [issue 20733](https://golang.org/issue/20733) proposes
that variables in a range loop should be redefined in each iteration.
Though in practice this change seems more likely to fix programs than
to break them, in principle this change might break working programs.
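A small sketch of the semantics at stake (the helper name `capture` is illustrative): with the per-iteration copy `v := v`, each closure gets its own variable; under the semantics that issue 20733 proposes to change, omitting that copy makes every closure observe the loop variable's final value:

```
package main

import "fmt"

// capture returns the values produced by closures created inside a
// range loop. The `v := v` line copies the loop variable once per
// iteration, so each closure captures its own value.
func capture() []int {
	var funcs []func() int
	for _, v := range []int{1, 2, 3} {
		v := v // without this copy, all closures would share one variable
		funcs = append(funcs, func() int { return v })
	}
	var out []int
	for _, f := range funcs {
		out = append(out, f())
	}
	return out
}

func main() {
	fmt.Println(capture())
}
```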
Note that a new keyword normally cannot cause a redefinition, though
we must be careful to ensure that that is true before introducing
one.
For example, if we introduce the keyword `check` as suggested in [the
error handling draft
design](https://go.googlesource.com/proposal/+/master/design/go2draft-error-handling.md),
and we permit code like `check(f())`, that might seem to be a
redefinition if `check` is defined as a function in the same package.
But after the keyword is introduced, any attempt to define such a
function will fail.
So it is not possible for code using `check`, under whichever meaning,
to compile with both version 1.N and 1.M.
The new keyword can be handled as a removal (of the non-keyword use of
`check`) and an addition (of the keyword `check`).
In order for the Go ecosystem to survive a transition to Go 2, we must
minimize these sorts of redefinitions.
As discussed earlier, successful languages have generally had
essentially no redefinitions beyond a certain point.
The complexity of a redefinition is, of course, that we can no longer
rely on the compiler to detect the problem.
When looking at a redefined language construct, the compiler cannot
know which meaning is meant.
In the presence of redefined language constructs, we cannot determine
the maximum language version.
We don't know if the construct is intended to be compiled with the old
meaning or the new.
The only possibility would be to let programmers set the language
version.
In this case it would be either a minimum or maximum language
version, as appropriate.
It would have to be set in such a way that it would not be
automatically updated by any tools.
Of course, setting such a version would be error prone.
Over time, a maximum language version would lead to surprising
results, as people tried to use new language features, and failed.
I think the only feasible safe approach is to not permit language
redefinitions.
We are stuck with our current semantics.
This doesn't mean we can't improve them.
For example, for [issue 20733](https://golang.org/issue/20733), the
range issue, we could change range loops so that taking the address of
a range parameter, or referring to it from a function literal, is
forbidden.
This would not be a redefinition; it would be a removal.
That approach might eliminate the bugs without the potential of
breaking code unexpectedly.
#### Build tags
Build tags are an existing mechanism that can be used by programs to
choose which files to compile based on the release.
Build tags name release versions, which look just like language
versions, but, speaking pedantically, are different.
In the discussion above we've talked about using Go release 1.N to
compile code with language version 1.N-1.
That is not possible using build tags.
Build tags can be used to set the maximum or a minimum release, or
both, that will be used to compile a specific file.
They can be a convenient way to take advantage of language changes
that are only available after a certain version; that is, they can be
used to set a minimum language version when compiling a file.
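For example, a file can declare a minimum release with a `// +build` constraint (the tag syntax in use when this was written); this sketch, with illustrative names, is only compiled by Go 1.9 or later and can therefore use type aliases freely:

```
// This file is only compiled by Go 1.9 or later; the build tag
// below acts as a minimum release version.

// +build go1.9

package main

import "fmt"

// stringAlias uses type-alias syntax, available from Go 1.9 on.
type stringAlias = string

func describe() stringAlias {
	return "built with go1.9 or later"
}

func main() {
	fmt.Println(describe())
}
```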
As discussed above, though, what is most useful for language changes
is the ability to set a maximum language version.
Build tags don't provide that in a useful way.
If you use a build tag to set your current release version as your
maximum version, your package will not build with later releases.
Setting a maximum language version is only possible when it is set to
a version before the current release, and is coupled with an alternate
implementation that is used for the later versions.
That is, if you are building with 1.N, it's not helpful to use a build
tag of `!1.N+1`.
You could use a build tag of `!1.M` where `M < N`, but in almost all
cases you will then need a separate file with a build tag of `1.M+1`.
Build tags can be used to handle language redefinitions: if there is a
language redefinition at language version `1.N`, programmers can write
one file with a build tag of `!1.N` using the old semantics and a
different file with a build tag of `1.N` using the new semantics.
However, these duplicate implementations are a lot of work, it's hard
to know in general when it is required, and it would be easy to make a
mistake.
The availability of build tags is not enough to overcome the earlier
comments about not permitting any language redefinitions.
#### import "go2"
It would be possible to add a mechanism to Go similar to Python's
`from __future__ import` and Perl's `use feature`.
For example, we could use a special import path, such as `import
"go2/type-aliases"`.
This would put the required language features in the file that uses
them, rather than hidden away in the go.mod file.
This would provide a way to describe the set of language additions
required by the file.
It's more complicated, because instead of relying on a language
version, the language is broken up into separate features.
There is no obvious way to ever remove any of these special imports,
so they will tend to accumulate over time.
Python and Perl avoid the accumulation problem by intentionally making
a backward incompatible change.
After moving to Python 3 or Perl 6, the accumulated feature requests
can be discarded.
Since Go is trying to avoid a large backward incompatible change,
there would be no clear way to ever remove these imports.
This mechanism does not address language removals.
We could introduce a removal import, such as `import
"go2/no-int-to-string"`, but it's not obvious why anyone would ever
use it.
In practice, there would be no way to ever remove language features,
even ones that are confusing and error-prone.
This kind of approach doesn't seem suitable for Go.
### Standard library changes
One of the benefits of a Go 2 transition is the chance to release some
of the standard library packages from the Go 1 compatibility
guarantee.
Another benefit is the chance to move many, perhaps most, of the
packages out of the six month release cycle.
If the modules experiment works out it may even be possible to start
doing this sooner rather than later, with some packages on a faster
cycle.
I propose that the six month release cycle continue, but that it be
treated as a compiler/runtime release cycle.
We want Go releases to be useful out of the box, so releases will
continue to include the current versions of roughly the same set of
packages that they contain today.
However, many of those packages will actually be run on their own
release cycles.
People using a given Go release will be able to explicitly choose to
use newer versions of the standard library packages.
In fact, in some cases they may be able to use older versions of the
standard library packages where that seems useful.
Different release cycles would require more resources on the part of
the package maintainers.
We can only do this if we have enough people to manage it and enough
testing resources to test it.
We could also continue using the six month release cycle for
everything, but make the separable packages available separately for
use with different, compatible, releases.
#### Core standard library
Still, some parts of the standard library must be treated as core
libraries.
These libraries are closely tied to the compiler and other tools, and
must strictly follow the release cycle.
Neither older nor newer versions of these libraries may be used.
Ideally, these libraries will remain on the current version 1.
If it seems necessary to change any of them to version 2, that will
have to be discussed on a case by case basis.
At this time I see no reason for it.
The tentative list of core libraries is:
* os/signal
* plugin
* reflect
* runtime
* runtime/cgo
* runtime/debug
* runtime/msan
* runtime/pprof
* runtime/race
* runtime/tsan
* sync
* sync/atomic
* testing
* time
* unsafe
I am, perhaps optimistically, omitting the net, os, and syscall
packages from this list.
We'll see what we can manage.
#### Penumbra standard library
The penumbra standard library consists of those packages that are
included with a release but are maintained independently.
This will be most of the current standard library.
These packages will follow the same discipline as today, with the
option to move to a v2 where appropriate.
It will be possible to use `go get` to upgrade or, possibly, downgrade
these standard library packages.
In particular, fixes can be made as minor releases separately from the
six month core library release cycle.
The go tool will have to be able to distinguish between the core
library and the penumbra library.
I don't know precisely how this will work, but it seems feasible.
When moving a standard library package to v2, it will be essential to
plan for programs that use both v1 and v2 of the package.
Those programs will have to work as expected, or if that is impossible
will have to fail cleanly and quickly.
In some cases this will involve modifying the v1 version to use an
internal package that is also shared by the v2 package.
Standard library packages will have to compile with older versions of
the language, at least the two previous release cycles that we
currently support.
#### Removing packages from the standard library
The ability to support `go get` of standard library packages will
permit us to remove packages from the releases.
Those packages will continue to exist and be maintained, and people
will be able to retrieve them if they need them.
However, they will not be shipped by default with a Go release.
This will include packages like
* index/suffixarray
* log/syslog
* net/http/cgi
* net/http/fcgi
and perhaps other packages that do not seem to be widely useful.
We should in due course plan a deprecation policy for old packages, to
move these packages to a point where they are no longer maintained.
The deprecation policy will also apply to the v1 versions of packages
that move to v2.
Or this may prove to be too problematic, and we should never deprecate
any existing package, and never remove them from the standard
releases.
## Go 2
If the above process works as planned, then in an important sense
there never will be a Go 2.
Or, to put it a different way, we will slowly transition to new
language and library features.
We could at any point during the transition decide that now we are
Go 2, which might be good marketing.
Or we could just skip it (there has never been a C 2.0, why have a Go
2.0?).
Popular languages like C, C++, and Java never have a version 2.
In effect, they are always at version 1.N, although they use different
names for that state.
I believe that we should emulate them.
In truth, a Go 2 in the full sense of the word, in the sense of an
incompatible new version of the language or core libraries, would not
be a good option for our users.
A real Go 2 would, perhaps unsurprisingly, be harmful.
# Proposal: Alias declarations for Go
Authors: Robert Griesemer & Rob Pike.
Last updated: July 18, 2016
Discussion at https://golang.org/issue/16339.
## Abstract
We propose to add alias declarations to the Go language. An alias declaration
introduces an alternative name for an object (type, function, etc.) declared
elsewhere. Alias declarations simplify splitting up packages because clients
can be updated incrementally, which is crucial for large-scale refactoring.
They also facilitate multi-package "components" where a top-level package
is used to provide a component's public API with aliases referring to the
component's internal packages. Alias declarations are
important for the Go implementation of the "import public" feature of Google
protocol buffers. They also provide a more fine-grained and explicit
alternative to "dot-imports".
## 1. Motivation
Suppose we have a library package L and a client package C that depends on L.
During refactoring of code, some functionality of L is moved into a new
package L1, which in turn may require updates to C. If there are multiple
clients C1, C2, ..., many of these clients may need to be updated
simultaneously for the system to build. Failing to do so will lead to build
breakages in a continuous build environment.
This is a real issue in large-scale systems such as we find at Google because
the number of dependencies can go into the hundreds if not thousands. Client
packages may be under control of different teams and evolve at different
speeds. Updating a large number of client packages simultaneously may be close
to impossible. This is an effective barrier to system evolution and maintenance.
If client packages can be updated incrementally, one package (or a small batch
of packages) at a time, the problem is avoided. For instance, after moving
functionality from L into L1, if it is possible for clients to continue to
refer to L in order to get the features in L1, clients don’t need to be
updated at once.
Go packages export constants, types (incl. associated methods), variables, and
functions. If a constant X is moved from a package L to L1, L may trivially
depend on L1 and re-export X with the same value as in L1.
```
package L
import "L1"
const X = L1.X // X is effectively an alias for L1.X
```
Client packages may use L1.X or continue to refer to L.X and still build
without issues. A similar work-around exists for functions: Package L may
provide wrapper functions that simply invoke the corresponding functions
in L1. Alternatively, L may define variables of function type which are
initialized to the functions which moved from L to L1:
```
package L
import "L1"
var F = L1.F // F is a function variable referring to L1.F
func G(args…) Result { return L1.G(args…) }
```
It gets more complicated for variables: An incremental approach still exists
but it requires multiple steps. Let’s assume we want to move a variable V
from L to L1. In a first step, we declare a pointer variable Vptr in L1
pointing to L.V:
```
package L1
import "L"
var Vptr = &L.V
```
Now we can incrementally update clients referring to L.V such that they use
(\*L1.Vptr) instead. This will give them full access to the same variable.
Once all references to L.V have been changed, L.V can move to L1; this step
doesn’t require any changes to clients of L1 (though it may require additional
internal changes in L and L1):
```
package L1
import "L"
var Vptr = &V
var V T = ...
```
Finally, clients may be incrementally updated again to use L1.V directly after
which we can get rid of Vptr.
There is no work-around for types: it is not possible to define a
named type T in L1 and re-export it in L such that L.T means the
exact same type as L1.T.
Discussion: The multi-step approach to factor out exported variables requires
careful planning. For instance, if we want to move both a function F and a
variable V from L to L1, we cannot do so at the same time: The forwarder F
left in L requires L to import L1, and the pointer variable Vptr introduced
in L1 requires L1 to import L. The consequence would be a forbidden import
cycle. Furthermore, if a moved function F requires access to a yet unmoved V,
it would also cause a cyclic import. Thus, variables will have to be moved
first in such a scenario, requiring multiple steps to enable incremental
client updates, followed by another round of incremental updates to move
everything else.
## 2. Alias declarations
To address these issues with a single, unified mechanism, we propose a new
form of declaration in Go, called an alias declaration. As the name suggests,
an alias declaration introduces an alternative name for a given object that
has been declared elsewhere, in a different package.
An alias declaration in package L makes it possible to move the original
declaration of an object X (a constant, type, variable, or function) from
package L to L1, while continuing to define and export the name X in L.
Both L.X and L1.X denote the exact same object (L1.X).
Note that the two predeclared types byte and rune are aliases for the
predeclared types uint8 and int32. Alias declarations will enable users
to define their own aliases, similar to byte and rune.
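The predeclared aliases show the intended behavior directly: in this small sketch, a `byte` value is returned as `uint8` without any conversion, because the two names denote the identical type:

```
package main

import "fmt"

// sameValue takes a byte and returns it as a uint8 with no
// conversion anywhere: byte is an alias for uint8, so the two
// names denote the same type.
func sameValue(b byte) uint8 {
	return b
}

func main() {
	var b byte = 'A'
	fmt.Println(sameValue(b))
}
```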
## 3. Notation
The existing declaration syntax for constants effectively permits
constant aliases:
```
const C = L1.C // C is effectively an alias for L1.C
```
Ideally we would like to extend this syntax to other declarations
and give it alias semantics:
```
type T = L1.T // T is an alias for L1.T
func F = L1.F // F is an alias for L1.F
```
Unfortunately, this notation breaks down for variables, because it already
has a given (and different) meaning in variable declarations:
```
var V = L1.V // V is initialized to L1.V
```
Instead of "=" we propose the new alias operator "=>" to solve the
syntactic issue:
```
const C => L1.C // for regularity only, same effect as const C = L1.C
type T => L1.T // T is an alias for type L1.T
var V => L1.V // V is an alias for variable L1.V
func F => L1.F // F is an alias for function L1.F
```
With that, a general alias specification is of the form:
```
AliasSpec = identifier "=>" PackageName "." identifier .
```
Per the discussion at https://golang.org/issue/16339, and based on feedback
from adonovan@golang, to avoid abuse, alias declarations may refer to imported
and package-qualified objects only (no aliases to local objects or
"dot-imports").
Furthermore, they are only permitted at the top (package) level,
not inside a function.
These restrictions do not hamper the utility of aliases for the intended
use cases. Both restrictions can be trivially lifted later if so desired;
we start with them out of an abundance of caution.
An alias declaration may refer to another alias.
The LHS identifier (C, T, V, and F in the examples above) in an alias
declaration is called the _alias name_ (or _alias_ for short). For each alias
name there is an _original name_ (or _original_ for short), which is the
non-alias name declared for a given object (e.g., L1.T in the example above).
Some more examples:
```
import "oldp"
var v => oldp.V // local alias, not exported
// alias declarations may be grouped
type (
T1 => oldp.T1 // original for T1 is oldp.T1
T2 => oldp.T2 // original for T2 is oldp.T2
T3 [8]byte // regular declaration may be grouped with aliases
)
var V2 T2 // same effect as: var V2 oldp.T2
func myF => oldp.F // local alias, not exported
func G => oldp.G
type T => oldp.MuchTooLongATypeName
func f() {
x := T{} // same effect as: x := oldp.MuchTooLongATypeName{}
...
}
```
The respective syntactic changes in the language spec are small and
concentrated. Each declaration specification (ConstSpec, TypeSpec, etc.)
gets a new alternative which is an alias specification (AliasSpec).
Grouping is possible, except for functions, just as with existing
declarations.
See Appendix A1 for details.
The short variable declaration form (using ":=") cannot be used to
declare an alias.
Discussion: Introducing a new operator ("=>") has the advantage of not
needing to introduce a new keyword (such as "alias"), which we can't really
do without violating the Go 1 promise (though r@golang and rsc@golang observe
that it would be possible to recognize "alias" as a keyword at the package-
level only, when in const/type/var/func position, and as an identifier
otherwise, and probably not break existing code).
The token sequence "=" ">" (or "==" ">") is not a valid sequence in a Go
program since ">" is a binary operator that must be surrounded by operands,
and the left operand cannot end in "=" or "==". Thus, it is safe to introduce
"=>" as a new token sequence without invalidating existing programs.
As proposed, an alias declaration must specify what kind of object the alias
refers to (const, type, var, or func). We believe this is an advantage:
It makes it clear to a user what the alias denotes (as with existing
declarations). It also makes it possible to report an error at the location
of the alias declaration if the aliased object changes (e.g., from being a
constant to a variable) rather than only at where the alias is used.
On the other hand, mdempsky@golang points out that using a keyword would
permit making changes in a package L1, say change a function F into a type F,
and not require a respective update of any alias declarations referring to
L1.F, which in turn might simplify refactoring. Specifically, one could
generalize import declarations so that they can be used to import and rename
specific objects. For instance:
```
import Printf = fmt.Printf
```
or
```
import Printf fmt.Printf
```
One might even permit the form
```
import context.Context
```
as a shorthand for
```
import Context context.Context
```
analogously to the renaming feature available to imports already. One of the
issues to consider here is that imported packages end up in the file scope and
are only visible in one file. Furthermore, currently they cannot be
re-exported. It is crucial for aliases to be re-exportable. Thus alias imports
would need to end up in package scope. (It would be odd if they ended up in
file scope: the same alias may have to be imported in multiple files of the
same package, possibly with different names.)
The choice of token ("=>") is somewhat arbitrary, but both "A => B" and
"A -> B" conjure up the image of a reference or forwarding from A to B.
The token "->" is also used in Unix directory listings for symbolic links,
where the lhs is another name (an alias) for the file mentioned on the RHS.
dneil@golang and r@golang observe that if "->" is written "in reverse" by
mistake, a declaration "var X -> p.X" meant to be an alias declaration is
close to a regular variable declaration "var X <-p.X" (with a missing "=");
though it wouldn’t compile.
Many people expressed a preference for "=>" over "->" on the tracking issue.
The argument is that "->" is more easily confused with a channel operation.
A few people would like to use "@" (as in _@lias_). For now we proceed with
"=>" - the token is trivially changed down the road if there is strong general
sentiment or a convincing argument for any other notation.
## 3. Semantics and rules
An alias declaration declares an alternative name, the alias, for a constant,
type, variable, or function, referred to by the RHS of the alias declaration.
The RHS must be a package-qualified identifier; it may itself be an alias, or
it may be the original name for the aliased object.
Alias cycles are impossible by construction since aliases must refer to fully
package-qualified (imported) objects and package import cycles are not
permitted.
An alias denotes the aliased object, and the effect of using an alias is
indistinguishable from the effect of using the original; the only visible
difference is the name.
An alias declaration may only appear at the top- (package-) level where it
is valid to have a keyword-based constant, type, variable, or function
declaration. Alias declarations may be grouped.
The same scope and export rules (capitalization for export) apply as for all
other identifiers at the top-level.
The scope of an alias identifier at the top-level is the package block
(as is the case for an identifier denoting a constant, type, variable,
or function).
An alias declaration may refer to unsafe.Pointer, but not to any of the unsafe
functions.
A package is considered "used" if any imported object of a package is used.
Consequently, declaring an alias referring to an object of a package marks
the package as used.
Discussion: The original proposal permitted aliases to any (even local)
objects and also to predeclared types in the Universe scope. Furthermore,
it permitted alias declarations inside functions. See the tracking issue
and earlier versions of this document for a more detailed discussion.
## 4. Impact on other libraries and tools
Alias declarations are a source-level and compile-time feature, with no
observable impact at run time. Thus, only libraries and tools that operate at
the source level or are involved in type checking and compilation are expected
to need adjustments.
### reflect package
The reflect package permits access to values and their types at run-time.
There’s no mechanism to make a new reflect.Value from a type name, only from
a reflect.Type. The predeclared aliases byte and rune are mapped to uint8 and
int32 already, and we would expect the same to be true for general aliases.
For instance:
```go
fmt.Printf("%T", rune(0))
```
prints the original type name int32, not rune. Thus, we expect no API or
semantic changes to package reflect.
### go/\* std lib packages
The packages under the go/\* std library tree which deal with source code will
need to be adjusted. Specifically, the packages go/token, go/scanner, go/ast,
go/parser, go/doc, and go/printer will need the necessary API extensions and
changes to cope with the new syntax. These changes should be straightforward.
Package go/types will need to understand how to type-check alias declarations.
It may also require an extension to its API (to be explored).
We don’t expect any changes to the go/build package.
### go doc
The `go doc` implementation will need to be adjusted: it relies on package
go/doc, which will now expose alias declarations, so `go doc` needs a
meaningful way to display them as well. This may be a simple extension of the
existing machinery to include alias declarations.
### Other tools operating on source code
A variety of other tools operate on or inspect source code, such as go vet,
golint, goimports, and others. What adjustments need to be made must be
decided on a case-by-case basis.
## 5. Implementation
There are many open questions that need to be answered by an implementation.
To mention a few of them:
Are aliases represented somehow as “first-class” citizens in a compiler and
go/types, or are they immediately “resolved” internally to the original names?
For go/types specifically, adonovan@golang points out that a first-class
representation may have an impact on the go/types API and potentially affect
many tools. For instance, type switches assuming only the kinds of objects now
in existence in go/types would need to be extended to handle aliases, should
they show up in the public API. The go/types Info.Uses map, which currently
maps identifiers to objects, will require special attention: should it record
alias-to-object references, or only the original names?
At first glance, since an alias is simply another name for an object, it would
seem that an implementation should resolve them immediately, making aliases
virtually invisible to the API (we may keep track of them internally only for
better error messages). On the other hand, they need to be exported and might
need to show up in go/types’ Info.Uses map (or some additional variant thereof)
so that tools such as guru have access to the alias names.
To be prototyped.
## 6. Other use cases
Alias declarations facilitate the construction of larger-scale libraries or
"components". For organizational and size reasons it often makes sense to split
up a large library into several sub-packages. The exported API of a sub-package
is driven by internal requirements of the component and may be only remotely
related to its public API. Alias declarations make it possible to "pull out"
the relevant declarations from the various sub-packages and collect them in
a single top-level package that represents the component's API.
The other packages can be organized in an "internal" sub-directory,
which makes them virtually inaccessible through the `go build` command (they
cannot be imported).
TODO(gri): Expand on use of alias declarations for protocol buffer's
"import public" feature.
TODO(gri): Expand on use of alias declarations instead of "dot-imports".
# Appendix
## A1. Syntax changes
The syntax changes necessary to accommodate alias declarations are limited
and concentrated. There is a new declaration specification called AliasSpec:
**AliasSpec = identifier "=>" PackageName "." identifier .**
An AliasSpec binds an identifier, the alias name, to the object (constant,
type, variable, or function) the alias refers to. The object must be specified
via a (possibly qualified) identifier. The aliased object must be a constant,
type, variable, or function, depending on whether the AliasSpec is within a
constant, type, variable, or function declaration.
Alias specifications may be used with any of the existing constant, type,
variable, or function declarations. The respective syntax productions are
extended as follows, with the extensions marked in bold:
ConstDecl = "const" ( ConstSpec | "(" { ConstSpec ";" } ")" ) .
ConstSpec = IdentifierList [ [ Type ] "=" ExpressionList ] **| AliasSpec** .
TypeDecl = "type" ( TypeSpec | "(" { TypeSpec ";" } ")" ) .
TypeSpec = identifier Type **| AliasSpec** .
VarDecl = "var" ( VarSpec | "(" { VarSpec ";" } ")" ) .
VarSpec = IdentifierList ( Type [ "=" ExpressionList ] | "=" ExpressionList ) **| AliasSpec** .
FuncDecl = "func" FunctionName ( Function | Signature ) **| "func" AliasSpec** .
## A2. Alternatives to this proposal
For completeness, we mention several alternatives.
1) Do nothing (wait for Go 2). The easiest solution, but it does not address
the problem.
2) Permit alias declarations for types only, use the existing work-arounds
otherwise. This would be a “minimal” solution for the problem. It would
require the use of work-arounds for all other objects (constants, variables,
and functions). Except for variables, those work-arounds would not be too
onerous. Finally, this would not require the introduction of a new operator
since "=" could be used.
3) Permit re-export of imports, or generalize imports. One might come up with
a notation to re-export all objects of an imported package wholesale,
accessible under the importing package name. Such a mechanism would address
the incremental refactoring problem and also permit the easy construction of
some sort of “super-package” (or component), the API of which would be the sum
of all the re-exported package APIs. This would be an “all-or-nothing” approach
that would not permit control over which objects are re-exported or under what
name. Alternatively, a generalized import scheme (discussed earlier in this
document) may provide a more fine-grained solution.
# Proposal: Type Aliases
Authors: Russ Cox, Robert Griesemer
Last updated: December 16, 2016
Discussion at https://golang.org/issue/18130.
## Abstract
We propose to add to the Go language a type alias declaration, which introduces an alternate name for an existing type. The primary motivation is to enable gradual code repair during large-scale refactorings, in particular moving a type from one package to another in such a way that code referring to the old name interoperates with code referring to the new name. Type aliases may also be useful for allowing large packages to be split into multiple implementation packages with a single top-level exported API, and for experimenting with extended versions of existing packages.
## Background
The article [Codebase Refactoring (with help from Go)](https://talks.golang.org/2016/refactor.article) presents the background for this change in detail.
In short, one of Go's goals is to scale well to large codebases. In those codebases, it's important to be able to refactor the overall structure of the codebase, including changing which APIs are in which packages. In those large refactorings, it is important to support a transition period in which the API is available from both the old and new locations and references to old and new can be mixed and interoperate. Go provides workable mechanisms for this kind of change when the API is a const, func, or var, but not when the API is a type. There is today no way to arrange that oldpkg.OldType and newpkg.NewType are identical and that code referring to the old name interoperates with code referring to the new name. Type aliases provide that mechanism.
This proposal is a replacement for the [generalized alias proposal](https://golang.org/design/16339-alias-decls) originally targeted for, but held back from, Go 1.8.
## Proposal
The new type declaration syntax `type T1 = T2` declares `T1` as a _type alias_ for `T2`. After such a declaration, T1 and T2 are [identical types](https://golang.org/ref/spec#Type_identity). In effect, `T1` is merely an alternate spelling for `T2`.
The language grammar changes by modifying the current definition of TypeSpec from
    TypeSpec = identifier Type .

to

    TypeSpec = identifier [ "=" ] Type .
Like in any declaration, T1 must be an [identifier](https://golang.org/ref/spec#Identifiers). If T1 is an [exported identifier](https://golang.org/ref/spec#Exported_identifiers), then T1 is exported for use by importing packages. There are no restrictions on the form of `T2`: it may be [any type](https://golang.org/ref/spec#Type), including but not limited to types imported from other packages. Anywhere a TypeSpec is allowed today, a TypeSpec introducing a type alias is valid, including inside function bodies.
Note that because T1 is an alternate spelling for T2, nearly all analysis of code involving T1 proceeds by first expanding T1 to T2. In particular, T1 is not necessarily a [named type](https://golang.org/ref/spec#Types) for purposes such as evaluating [assignability](https://golang.org/ref/spec#Assignability).
To make the point about named types concrete, consider:
```go
type Name1 map[string]string
type Name2 map[string]string
type Alias = map[string]string
```
According to [Go assignability](https://golang.org/ref/spec#Assignability), a value of type Name1 is assignable to map[string]string (because the latter is not a named type) but a value of type Name1 is not assignable to Name2 (because both are named types, and the names differ). In this example, because Alias is an alternate spelling for map[string]string, a value of type Name1 is assignable to Alias (because Alias is the same as map[string]string, which is not a named type).
Note: It’s possible that due to aliases, the spec term “named type” should be clarified or reworded in some way, or a new term should replace it, like “declared type”. This proposal uses words like “written” or “spelled” when describing aliases to avoid the term “named”. We could also use a better pair of names than “type declaration” and “type alias declaration”.
### Comparison of type declarations and type aliases
Go already has a [type declaration](https://golang.org/ref/spec#Type_declarations) `type Tnamed Tunderlying`. That declaration defines a new type Tnamed, different from (not identical to) Tunderlying. Because Tnamed is different from all other types, notably Tunderlying, composite types built from Tnamed and Tunderlying are different. For example, these pairs are all different types:
- *Tnamed and *Tunderlying
- chan Tnamed and chan Tunderlying
- func(Tnamed) and func(Tunderlying)
- interface{ M() Tnamed } and interface{ M() Tunderlying }
Because Tnamed and Tunderlying are different types, a Tunderlying stored in an interface value x does not match a type assertion `x.(Tnamed)` and does not match a type switch `case Tnamed`; similarly, a Tnamed does not match `x.(Tunderlying)` nor `case Tunderlying`.
Tnamed, being a named type, can have [method declarations](https://golang.org/ref/spec#Method_declarations) associated with it.
In contrast, the new type alias declaration `type T1 = T2` defines T1 as an alternate way to write T2. The two _are_ identical, and so these pairs are all identical types:
- *T1 and *T2
- chan T1 and chan T2
- func(T1) and func(T2)
- interface{ M() T1 } and interface{ M() T2 }
Because T1 and T2 are identical types, a T2 stored in an interface value x does match a type assertion `x.(T1)` and does match a type switch `case T1`; similarly a T1 does match `x.(T2)` and `case T2`.
Because T1 and T2 are identical types, it is not valid to list both as different cases in a type switch, just as it is not valid to list T1 twice or T2 twice. (The spec already says, “[The types listed in the cases of a type switch must all be different.](https://golang.org/ref/spec#Type_switches)”)
Since T1 is just another way to write T2, it does not have its own set of method declarations. Instead, T1’s method set is the same as T2’s. At least for the initial trial, there is no restriction against method declarations using T1 as a receiver type, provided using T2 in the same declaration would be valid.
Note that if T1 is an alias for a type T2 defined in an imported package, method declarations using T1 as a receiver type are invalid, just as method declarations using T2 as a receiver type are invalid.
### Type cycles
In a type alias declaration, in contrast to a type declaration, T2 must never refer, directly or indirectly, to T1. For example `type T = *T` and `type T = struct { next *T }` are not valid type alias declarations. In contrast, if the equals signs were dropped, those would become valid ordinary type declarations. The distinction is that ordinary type declarations introduce formal names that provide a way to describe the recursion. In contrast, aliases must be possible to “expand out”, and there is no way to expand out an alias like `type T = *T`.
### Relationship to byte and rune
The language specification already defines `byte` as an alias for `uint8` and similarly `rune` as an alias for `int32`, using the word alias as an informal term. It is a goal that the new type declaration semantics not introduce a different meaning for alias. That is, it should be possible to describe the existing meanings of `byte` and `uint8` by saying that they behave as if predefined by:
```go
type byte = uint8
type rune = int32
```
### Effect on embedding
Although T1 and T2 may be identical types, they are written differently. The distinction is important in an [embedded field](https://golang.org/ref/spec#Struct_types) within a struct. In this case, the effective name of the embedded field depends on how the type was written: in the struct
```go
type MyStruct struct {
	T1
}
```
the field always has name T1 (and only T1), even when T1 is an alias for T2. This choice avoids needing to understand how T1 is defined in order to understand the struct definition. Only if (or when) MyStruct's definition changes from using T1 to using T2 would the field name change. Also, T2 may not be a named type at all: consider embedding a MyMap defined by `type MyMap = map[string]interface{}`.
Similarly, because an embedded T1 must be accessed using the name T1, not T2, it is valid to embed both T1 and T2 (assuming T2 is a named type):
```go
type MyStruct struct {
	T1
	T2
}
```
References to myStruct.T1 or myStruct.T2 resolve to the corresponding fields. (Of course, this situation is unlikely to arise, and if T1 (= T2) is a struct type, then any fields within the struct would be inaccessible by direct access due to the usual [selector ambiguity rules](https://golang.org/ref/spec#Selectors).)
These choices also match the current meaning today of the byte and rune aliases. For example, it is valid today to write
```go
type MyStruct struct {
	byte
	uint8
}
```
Because neither type has methods, that declaration is essentially equivalent to
```go
type MyStruct struct {
	byte  byte
	uint8 uint8
}
```
## Rationale
An alternate approach would be [generalized aliases](https://golang.org/design/16339-alias-decls), as discussed during the Go 1.8 cycle. However, generalized aliases overlap with and complicate other declaration forms, and the only form where the need is keenly felt is types. In contrast, this proposal limits the change in the language to types, and there is still plenty to do; see the Implementation section.
The implementation changes for type aliases are smaller than for generalized aliases, because while there is new syntax there is no need for a new AST type (the new syntax is still represented as an ast.TypeSpec, matching the grammar). With generalized aliases, any program processing ASTs needed updates for the new forms. With type aliases, most programs processing ASTs care only that they are holding a TypeSpec and can treat type alias declarations and regular type declarations the same, with no code changes. For example, we expect that cmd/vet and cmd/doc may need no changes for type aliases; in contrast, they both crashed and needed updates when generalized aliases were tried.
The question of the meaning of an embedded type alias was identified as [issue 17746](https://github.com/golang/go/issues/17746), during the exploration of general aliases. The rationale for the decision above is given inline with the decision. A key property is that it matches the current handling of byte and rune, so that the language need not have two different classes of type alias (predefined vs user-defined) with different semantics.
The syntax and distinction between type declarations and type alias declarations ends up being nearly identical to that of [Pascal](https://www.freepascal.org/docs-html/ref/refse19.html). The alias syntax itself is also the same as in later languages like [Rust](https://doc.rust-lang.org/book/type-aliases.html).
## Compatibility
This is a new language feature; existing code continues to compile, in keeping
with the [compatibility guidelines](https://golang.org/doc/go1compat).
In the libraries, there is a new field in go/ast's TypeSpec, and there is a new type in go/types, namely types.Alias (details in the Implementation section below). These are both permitted changes at the library level. Code that cares about the semantics of Go types may need updating to handle aliases. This affects programming tools and is unavoidable with nearly any language change.
## Implementation
Since this is a language change, the implementation affects many pieces of the
Go distribution and subrepositories.
The goal is to have basic functionality ready and checked in at the start of the Go 1.9 development cycle, to enable exploration and experimentation by users during the
entire three-month development cycle.

The implementation work is split out below, with owners and target dates listed (Feb 1 is the beginning of the Go 1.9 cycle).
### cmd/compile
The gc compiler needs to be updated to parse the new syntax, to apply the type checking rules appropriately, and to include appropriate information in its export format.
Minor compiler changes will also be needed to generate proper reflect information for embedded fields, but that is a current bug in the handling of byte and rune. Those will be handled as part of the reflect changes.
Owner: gri, mdempsky, by Jan 31
### gccgo
Gccgo needs to be updated to parse the new syntax, to apply the type checking rules appropriately, and to include appropriate information in its export format.
It may also need the same reflect fix.
Owner: iant, by Jan 31
### go/ast
Reflecting the expansion of the grammar rule, ast.TypeSpec will need some additional field to declare that a type specifier defines a type alias. The likely choice is `EqualsPos token.Pos`, with a zero pos meaning there is no equals sign (an ordinary type declaration).
Owner: gri, by Jan 31
### go/doc
Because go/doc only works with go/ast, not go/types, it may need no updates.
Owner: rsc, by Jan 31
### go/parser
The parser needs to be updated to recognize the new TypeSpec grammar including an equals sign and to generate the appropriate ast.TypeSpec. There should be no user-visible API changes to the package.
Owner: gri, by Jan 31
### go/printer
The printer needs to be updated to print an ast.TypeSpec with an equal sign when present, including lining up equal signs in adjacent type alias specifiers.
Owner: gri, by Jan 31
### go/types
The types.Type interface is implemented by a set of concrete implementations, one for each kind of type. Most likely, a new concrete implementation \*types.Alias will need to be defined.
The \*types.Alias form will need a new method `Defn() types.Type` that gives the definition of the alias.
The types.Type interface defines a method `Underlying() types.Type`. A \*types.Alias will implement Underlying as Defn().Underlying(), so that code calling Underlying finds its way through both aliases and named types to the underlying form.
Any clients of this package that attempt an exhaustive type switch over types.Type possibilities will need to be updated; clients that type switch over typ.Underlying() may not need updates.
Note that code (like in the subrepos) that needs to compile with Go 1.8 will not be able to use the new API in go/types directly. Instead, there should probably be a new subrepo package, say golang.org/x/tools/go/types/typealias, that contains pre-Go 1.9 and Go 1.9 implementations of a combined type-check-and-destructure helper:
```go
func IsAlias(t types.Type) (name *types.TypeName, defn types.Type, ok bool)
```
Code in the subrepos can import this package and use this function any time it needs to consider the possibility of an alias type.
Owner: gri, adonovan, by Jan 31
### go/importer
The go/importer’s underlying import data decoders must be updated so they can understand export data containing alias information. This should be done more or less simultaneously with the compiler changes.
Owner: gri, by Jan 31 (for go/internal/gcimporter)
Owner: gri, by Jan 31 (for go/internal/gccgoimporter)
### reflect
Type aliases are mostly invisible at runtime. In particular, since reflect uses reflect.Type equality as type identity, aliases must in general not appear in the reflect runtime data or API.
An exception is the names of embedded fields. To date, package reflect has assumed that the name can be inferred from the type of the field. Aliases make that not true. Embedding `type T1 = map[string]interface{}` will show up as an embedded field of type map[string]interface{}, which has no name. Embedding `type T1 = T2` will show up as an embedded field of type T2, but it has name T1.
Reflect already gets this [wrong for the existing aliases byte and rune](https://github.com/golang/go/issues/17766). The fix for byte and rune should work unchanged for general type aliases as well.
The reflect.StructField already contains an `Anonymous bool` separate from `Name string`. Fixing the problem should be a matter of emitting the right information in the compiler and populating StructField.Name correctly.
There should be no API changes that affect clients of reflect.
Owner: rsc, by Jan 31
### cmd/api
The API checker cmd/api contains a type switch over implementations of types.Type. It will need to be updated to handle types.Alias.
Owner: bradfitz, by Jan 31
### cmd/doc
Both godoc and cmd/doc (invoked as 'go doc') need to be able to display type aliases.
If possible, the changes to go/ast, go/doc, go/parser, and go/printer should be engineered so that godoc and 'go doc' need no changes at all, other than compiling against the newer versions of these packages. In particular, having no new go/ast type means that type switches need not be updated, and existing code processing TypeSpec is likely to continue to work for type alias-declaring TypeSpecs.
(It would be nice to have the same property for go/types, but that doesn't seem possible: go/types must expose the new concept of alias.)
Owner: rsc, by Jan 31
### cmd/gofmt
Gofmt should need no updating beyond compiling with the new underlying packages.
Owner: gri, by Jan 31
### cmd/vet
Vet uses go/types but does not appear to have any exhaustive switches on types.Type. It may need no updating.
Owner: rsc, by Jan 31
### golang.org/x/tools/cmd/goimports
Goimports should need no updating beyond compiling with the new underlying packages. Goimports does care about the set of exported symbols from a package, but it already handles exported type definitions as represented by TypeSpecs; the same code should work unmodified for aliases.
Owner: bradfitz, by Jan 31
### golang.org/x/tools/cmd/godex
May not need much updating. printer.writeTypeInternal has a switch on types.Type with a default that does p.print(t.String()). This may be right for aliases and may just work, or may need to be updated.
Owner: gri, by Apr 30.
### golang.org/x/tools/cmd/guru
Various switches on types.Type that may need updating.
Owner: adonovan, by Feb 28.
### golang.org/x/tools/go/callgraph/rta
Has type switches on types.Type.
Owner: adonovan, by Apr 30.
### golang.org/x/tools/go/gcexportdata
Implemented in terms of golang.org/x/tools/go/gcimporter15, which contains type switches on types.Type. Must also update to understand aliases in export data. golang.org/x/tools/go/gcimporter15 contains mostly modified copies of the code under go/internal/gcimporter. They should be updated simultaneously.
Owner: gri, by Jan 31.
### golang.org/x/tools/go/internal/gccgoimporter
Must update to understand aliases in export data. This code is mostly a modified copy of the code under go/internal/gccgoimporter. They should be updated simultaneously.
Owner: gri, by Apr 30.
### golang.org/x/tools/go/pointer
Semantically, type aliases should have very little effect. May not need significant updates, but there are a few type switches on types.Type.
Owner: adonovan, matloob, by Apr 30.
### golang.org/x/tools/go/ssa
Semantically, type aliases should have very little effect. May not need significant updates, but there are a few type switches on types.Type.
Owner: adonovan, matloob, by Apr 30.
### golang.org/x/tools/go/types/typeutil
Contains an exhaustive type switch on types.Type in Hasher.hashFor. Will need to be updated for types.Alias.
Owner: gri, adonovan, by Apr 30.
### golang.org/x/tools/godoc/analysis
Contains mentions of types.Named, but apparently no code with a type switch on types.Type (`case *types.Named` never appears). It is possible that no updates are needed.
Owner: adonovan, gri, by Jan 31.
### golang.org/x/tools/refactor
Has a switch on a types.Type of an embedded field to look for the type of the field and checks for \*types.Pointer pointing at \*types.Named and also \*types.Named. Will need to allow \*types.Alias in both places as well.
Owner: adonovan, matloob, by Apr 30.
## Open issues (if applicable)
As noted above, the language specification term “named type” may need to be rephrased in some places. This proposal is clear on the semantics, but alternate phrasing may help make the specification clearer.
The [discussion summary](https://github.com/golang/go/issues/18130#issue-192757828) includes a list of possible restrictions and concerns for abuse. While it is likely that many concerns will not in practice have the severity to merit restrictions, we may need to work out agreed-upon guidance for uses of type aliases. In general this is similar to any other language feature: the first response to potential for abuse is education, not restrictions.
# Proposal: Dense mark bits and sweep-free allocation
or, *How I learned to stop worrying and love the bitmap*
Author: Austin Clements
Last updated: 2015-09-30
Discussion at https://golang.org/issue/12800.
## Abstract
This document proposes a change to memory allocation to eliminate the
need for the sweeper and a new representation for the mark bitmap that
enables this. This reduces the cost of allocation, significantly
improves the locality of the mark bitmap, and paves the way for future
advances to the Go GC that require multiple mark bits.
## Background
All current releases of Go up to and including Go 1.5 use a
*mark-sweep* garbage collector. As of Go 1.5, this works by
alternating between a mostly-concurrent mark phase and a concurrent
sweep phase. The mark phase finds all reachable objects and *marks*
them, leaving all unreachable objects *unmarked*. The sweep phase
walks through the entire heap, finds all unmarked objects, and adds
them to free lists, which the allocator in turn uses to allocate new
objects.
However, this sweep phase is, in a sense, redundant. It primarily
transforms one representation of the free heap—the mark bits—into
another representation of the free heap—the free lists. Not only does
this take time, but the free list representation is unfriendly to
modern CPUs since it is not very cacheable and accesses to it are hard
to predict. Furthermore, the current mark representation is also
cache-unfriendly, which adds even more to the cost of sweeping.
This document proposes a design for eliminating the sweeper. The key
idea is to allocate directly using the mark bitmap, foregoing the free
list representation entirely. Doing this efficiently requires a new,
dense representation for mark bits that enables fast scanning and
clearing. This representation also makes it easy to maintain multiple
mark bitmaps simultaneously. We introduce the dense bitmap
representation first. We then present a simple system for allocation
based on two mark bitmaps that eliminates the free list and hence the
need for the sweeper.
## Motivation
Typical Go programs spend about 5% of their CPU in the sweeper or in
cache misses induced by the free list representation. Hence, if we can
significantly reduce or eliminate the cost of the sweeping from
allocation and improve the free set representation, we can improve
overall program performance.
To measure this, we ran the go1 benchmark suite, just the BinaryTree17
benchmark with `-benchtime 5s`, and the x/benchmarks garbage benchmark
with `-benchmem 1024`. These represent a range of CPU-intensive and
allocation-intensive benchmarks. In all cases, GOMAXPROCS is 4. The
CPU time breaks down as follows:
| | go1 (all) | BinaryTree17 | garbage 1GB |
| --- | ---------:| ------------:| -----------:|
| CPU in mark | 2.8% | 4.0% | 34.7% |
| CPU in sweep | 3.3% | 20.6% | 5.2% |
| CPU in mallocgc (excl. sweep, GC) | 6.8% | 39.0% | 15.8% |
|   % of mallocgc spent walking free list | 19.2% | 17.5% | 14.3% |
(Times were collected using pprof. mark shows samples matching
`\.gc$|gcBgMarkWorker|gcAssistAlloc|gchelper`. sweep shows
`mSpan_Sweep`. mallocgc shows `mallocgc -gcAssistAlloc -mSpan_Sweep`.)
This proposal replaces sweepone with a scan that should require
roughly 1ms of CPU time per heap GB per GC cycle. For BinaryTree17,
that’s less than 0.1% of its CPU time, versus 21% for the current
sweep. It replaces the cost of walking the free list in mallocgc with
what is likely to be a smaller cost of sequentially scanning a bitmap.
It’s likely to have negligible effect on mark performance. Finally, it
should increase the heap footprint by roughly 0.02%.
## Dense mark bitmaps
Currently, the mark bits are stored in the *heap bitmap*, which is a
structure that stores two bits for every word of the heap, using a
simple formula to map between a heap address and a bitmap address. The
mark bit is stored in one of the bits for the first word of every
object. Because mark bits are "on the side," spans can be efficiently
subdivided into smaller object sizes (especially power-of-two sizes).
However, this sparse bitmap is expensive to scan and clear, as it
requires a strided, irregular traversal of memory, as shown in
figure 1. It also makes it difficult (albeit not impossible) to
maintain two sets of mark bits on word-sized objects, which is
necessary for sweep-free allocation.
![](12800/sparse.png)
**Figure 1.** In Go 1.5, mark bits are sparse and irregularly strided.
Therefore, this proposal separates the mark bits from the heap bitmap
into a dedicated mark bitmap structure. The difficulty here is,
because objects are different sizes and a given span can be freed and
reused for a different size class any number of times, a dense mark
bitmap cannot be addressed solely based on an object’s address. It
must be indirected through the object’s span.
One solution is to store the mark bits for the objects in each span as
a dense bit array in the span itself, either before or after the
objects. This dense representation is more efficient to scan and clear
than the sparse representation, but still requires visiting every span
to do so. Furthermore, it increases memory fragmentation, especially
for power-of-two allocations, which currently have zero external
fragmentation.
We propose maintaining a dense mark bitmap, but placing it outside of
the spans and in mostly contiguous memory by allocating the mark
bitmap anew *for every GC cycle*. In both the current sparse bitmap
and the strawman dense bitmap above, an object’s mark bit is in the
same location for the lifetime of that object. However, an object’s
mark only needs to persist for two GC cycles. By allocating the mark
bitmap anew for each GC cycle, we can avoid impacting span
fragmentation and use contiguous memory to enable bulk clearing of the
bitmap. Furthermore, discarding and recreating the bitmap on every
cycle lets us use a trivial scheme for allocating mark bitmaps to
spans while simultaneously dealing with changing heap layouts: even
though spans are reused for different size classes, any given span can
change size class at most once per GC cycle, so there’s no need for
sophisticated management of a mark bitmap that will only last two
cycles.
Between GC cycles, the runtime will prepare a fresh mark bitmap for
the upcoming mark phase, as shown in figure 2. It will traverse the
list of in-use spans and use a simple arena-style linear allocator to
assign each span a mark bitmap sized for the number of objects in that
span. The arena allocator will obtain memory from the system in
reasonably large chunks (e.g., 64K) and bulk zero it. Likewise, any
span that transitions from free to in-use during this time will also
be allocated a mark bitmap.
![](12800/dense.png)
**Figure 2.** Proposed arena allocation of dense mark bitmaps. For
illustrative purposes, bitmaps are shown allocated without alignment
constraints.
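The between-cycle bitmap setup described above can be sketched as a simple arena-style linear allocator. This is a minimal illustration, not the runtime's actual implementation; the type and field names (`bitmapArena`, `alloc`) are hypothetical, and it assumes any single span's bitmap fits in one chunk.

```go
package main

import "fmt"

const arenaChunkBytes = 64 << 10 // memory is obtained from the system in 64K chunks

// bitmapArena hands out zeroed mark-bitmap storage for one GC cycle.
// When the cycle's bitmap is no longer needed, the whole arena can be
// bulk freed rather than freeing per-span bitmaps individually.
type bitmapArena struct {
	chunk []byte // current chunk; freshly allocated memory is already zeroed
	off   int    // allocation offset within chunk
}

// alloc returns a zeroed bitmap with room for nobj mark bits, sized for
// the number of objects in one span.
func (a *bitmapArena) alloc(nobj int) []byte {
	n := (nobj + 7) / 8 // bytes needed, one bit per object
	if a.off+n > len(a.chunk) {
		a.chunk = make([]byte, arenaChunkBytes) // grab and bulk-zero a fresh chunk
		a.off = 0
	}
	b := a.chunk[a.off : a.off+n]
	a.off += n
	return b
}

func main() {
	var a bitmapArena
	// Walk the in-use spans, assigning each a bitmap sized to its object count.
	spans := []int{512, 128, 1024} // objects per span (illustrative)
	for _, nobj := range spans {
		fmt.Println(len(a.alloc(nobj))) // bytes of bitmap handed to each span
	}
}
```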
When the mark phase begins, all in-use spans will have zeroed mark
bitmaps. The mark phase will set the mark bit for every reachable
object. Then, during mark termination, the garbage collector will
transfer this bitmap to the allocator, which can use it to find free
objects in spans that were in-use at the beginning of the mark phase.
Any spans that are allocated after the mark phase (including after
mark termination) will have a nil allocation bitmap, which is
equivalent to all objects in that span being unmarked and allows for
bump-pointer allocation within that span. Finally, when the allocator
is done with the mark bitmap, the whole arena can be bulk freed.
## Dense mark bitmap performance
The entire process of allocating and clearing the new mark bitmap will
require only about 1 ms of CPU time per heap GB. Walking the list of
in-use spans requires about 1 ms per heap GB and, thanks to the arena
allocation, zeroing the bitmap should add only 40 µs per heap GB,
assuming 50 GB/sec sequential memory bandwidth and an average object
size of 64 bytes.
Furthermore, the memory overhead of the mark bitmap is minimal. The
instantaneous average object size of "go build std" and "go test
-short std" is 136 bytes and 222 bytes, respectively. At these sizes,
and assuming two bitmaps, the mark bitmaps will have an overhead of
only 0.18% (1.9 MB per heap GB) and 0.11% (1.2 MB per heap GB),
respectively. Even given a very conservative average object size of 16
bytes, the overhead is only 1.6% (16 MB per heap GB).
Dense mark bits should have negligible impact on mark phase
performance. Because of interior pointers, marking an object already
requires looking up that object’s span and dividing by the span’s
object size to compute the object index. With the sparse mark bitmap,
it requires multiplying and adding to compute the object’s base
address; three subtractions, three shifts, and a mask to compute the
location and value of the mark bit; and a random atomic memory write
to set the mark bit. With the dense mark bitmap, it requires reading
the mark bitmap pointer from the span (which can be placed on the same
cache line as the metadata already read); an addition, two shifts, and
a mask to compute the location and value of the mark bit; and a random
atomic memory write to set the mark bit.
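The dense-bitmap mark path described above can be sketched as follows. The function name and signature are hypothetical; `spanBase` and `objSize` stand in for the span metadata the collector has already looked up, since that lookup is required anyway to handle interior pointers.

```go
package main

import "fmt"

// markIndex computes which word of a span's dense mark bitmap, and which
// bit within that word, hold the mark for the object containing addr.
// The division by objSize yields the object index, which handles interior
// pointers for free; the shift and mask then locate the bit.
func markIndex(addr, spanBase, objSize uintptr) (word uintptr, mask uint64) {
	idx := (addr - spanBase) / objSize // object index within the span
	return idx / 64, 1 << (idx % 64)
}

func main() {
	// An interior pointer into object 70 of a span of 64-byte objects:
	// the mark is bit 6 of bitmap word 1.
	word, mask := markIndex(0x10000+70*64+8, 0x10000, 64)
	fmt.Println(word, mask)
}
```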
Dense mark bits should simplify some parts of mark that currently
require checks and branches to treat the heap bitmap for the first two
object words differently from the heap bitmap for the remaining words.
This may improve branch predictor behavior and hence performance of
object scanning.
Finally, dense mark bits may slightly improve the performance of
unrolling the heap bitmap during allocation. Currently, small objects
require atomic writes to the heap bitmap because they may race with
the garbage collector marking objects. By separating out the mark
bits, the sole writer to any word of the heap bitmap is the P
allocating from that span, so all bitmap writes can be non-atomic.
## Sweep-free allocation
The key idea behind eliminating the sweeper is to use the mark bitmap
directly during allocation to find free objects that can be
reallocated, rather than transforming this bitmap into a free list and
then allocating using the free list. However, in a concurrent garbage
collector some second copy of the heap free set is necessary for the
simple reason that the mutator continues to allocate objects from the
free set at the same time the concurrent mark phase is constructing
the new free set.
In the current design, this second copy is the free list, which is
fully constructed from the mark bits by the time the next mark phase
begins. This requires essentially no space because the free list can
be threaded through the free objects. It also gives the system a
chance to clear all mark bits in preparation for the next GC cycle,
which is expensive in the sparse mark bitmap representation, so it
needs to be done incrementally and simultaneously with sweeping the
marks. The flow of information in the current sweeper is shown in
figure 3.
![](12800/sweep-flow.png)
**Figure 3.** Go 1.5 flow of free object information.
We propose simply using two sets of mark bits. At the end of the mark
phase, the object allocator switches to using the mark bitmap
constructed by the mark phase and the object allocator’s current mark
bitmap is discarded. During the time between mark phases, the runtime
allocates and bulk zeroes the mark bitmap for the next mark phase. The
flow of information about free objects in this design is shown in
figure 4.
![](12800/mark-flow.png)
**Figure 4.** Proposed flow of free object information.
To allocate an object, the object allocator obtains a cached span or
allocates a new span from the span allocator as it does now. The span
allocator works much like it does now, with the exception that where
the span allocator currently sweeps the span, it will now simply reset
its bitmap pointer to the beginning of the span’s bitmap. With a span
in hand, the object allocator scans its bitmap for the next unmarked
object, updates the span’s bitmap pointer, and initializes the object.
If there are no more unmarked objects in the span, the object
allocator acquires another span. Note that this may happen repeatedly
if the allocator obtains spans that are fully marked (in contrast,
this is currently handled by the sweeper, so span allocation will
never return a fully marked span).
Most likely, it makes sense to cache an inverted copy of the current
word of the bitmap in the span. Allocation can then find the next set
bit using processor ctz intrinsics or efficient software ctz and bit
shifts to maintain its position in the word. This also simplifies the
handling of fresh spans that have nil allocation bitmaps.
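The ctz-based scan sketched above might look like the following. The names (`nextFree`, `allocCache`) are illustrative, not the runtime's; the cached word is the inverted copy of the current bitmap word, so set bits denote free object slots.

```go
package main

import (
	"fmt"
	"math/bits"
)

// nextFree returns the index of the next unmarked object within the
// cached bitmap word, or -1 if the word is exhausted (meaning the next
// word must be loaded, or another span acquired). allocCache is the
// inverted copy of the current mark bitmap word: free slots are 1 bits.
func nextFree(allocCache *uint64) int {
	if *allocCache == 0 {
		return -1
	}
	i := bits.TrailingZeros64(*allocCache) // hardware ctz where available
	*allocCache &= *allocCache - 1         // clear the bit: slot is now allocated
	return i
}

func main() {
	marks := uint64(0b10110) // objects 1, 2, and 4 are marked (live)
	cache := ^marks          // invert: set bits are free slots
	for j := 0; j < 3; j++ {
		fmt.Println(nextFree(&cache)) // the first three free object indexes
	}
}
```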
### Finalizers
One complication of this approach is that sweeping is currently also
responsible for queuing finalizers for unmarked objects. One solution
is to simply check the mark bits of all finalized objects between GC
cycles. This could be done in the same loop that allocates new mark
bits to all spans after mark termination, and would add very little
cost. In order to do this concurrently, if the allocator obtained a
span before the garbage collector was able to check it for finalizers,
the allocator would be responsible for queuing finalizers for objects
on that span.
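The between-cycle finalizer check could be sketched as below. All types and names here (`span`, `finalizer`, the global queue) are hypothetical stand-ins for the runtime's structures; the point is only that the check is a cheap bitmap lookup per registered finalizer.

```go
package main

import "fmt"

type finalizer struct{ objIndex int }

type span struct {
	marks      []byte      // dense mark bitmap from the last mark phase
	finalizers []finalizer // finalizers registered for objects in this span
}

var queue []finalizer

// checkFinalizers runs between GC cycles, in the same loop that assigns
// fresh mark bitmaps to spans: any finalized object left unmarked is
// unreachable, so its finalizer is queued to run.
func checkFinalizers(spans []*span) {
	for _, s := range spans {
		for _, fin := range s.finalizers {
			i := fin.objIndex
			if s.marks[i/8]&(1<<(i%8)) == 0 { // unmarked: object is dead
				queue = append(queue, fin)
			}
		}
	}
}

func main() {
	s := &span{marks: []byte{0b00000010}, finalizers: []finalizer{{1}, {3}}}
	checkFinalizers([]*span{s})
	fmt.Println(len(queue)) // only object 3 is unmarked, so one finalizer queued
}
```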
## Compatibility
This proposal only affects the performance of the runtime. It does not
change any user-facing Go APIs, and hence it satisfies Go 1
compatibility.
## Implementation
This work will be carried out by Austin Clements and Rick Hudson
during the Go 1.6 development cycle.
Figure 5 shows the components of this proposal and the dependencies
between implementing them.
![](12800/plan.png)
**Figure 5.** Implementation dependency diagram.
We will implement dense mark bitmaps first because it should be fairly
easy to update the current sweeper to use the dense bitmaps and this
will enable multiple mark bitmaps. We will then build sweep-free
allocation on top of this. Sweep-free allocation makes it difficult to
detect completely unmarked spans and return them to the span
allocator, so we will most likely want to implement eager freeing of
unmarked spans first, as discussed in issue
[#11979](https://golang.org/issue/11979), though this is not strictly
necessary. At any point after dense mark bitmaps are in place, we can
implement the optimizations to the heap bitmap discussed in "Dense
mark bitmap performance."
## Related work
Dense mark bitmaps have a long history in garbage collectors that use
segregated-fits allocation. The Boehm conservative collector
[Boehm, 1988] used dense mark bitmaps, but stored each span’s bitmap
along with that span’s objects, rather than as part of large,
contiguous allocations. Similarly, Garner [2007] explores a mark-sweep
collector that supports both mark bits in object headers and dense
"side bitmaps." Garner observes the advantages of dense mark bitmaps
for bulk zeroing, and concludes that both approaches have similar
marking performance, which supports our prediction that switching to
dense mark bitmaps will have negligible impact on mark phase
performance.
Traditionally, mark-sweep garbage collectors alternate between marking
and sweeping. However, there have been various approaches to enabling
simultaneous mark and sweep in a concurrent garbage collector that
closely resemble our approach of allowing simultaneous mark and
allocation. Lamport [1976] introduced a "purple" extension to the
traditional tri-color abstraction that made it possible for the
sweeper to distinguish objects that were not marked in the previous
mark phase (and hence should be swept) from objects that are not yet
marked in the current mark phase, but may be marked later in the
phase. To reduce the cost of resetting these colors, Lamport’s
algorithm cycles through three different interpretations of the color
encoding. In contrast, our approach adheres to the tri-color
abstraction and simply alternates between two different bitmaps. This
means we have to reset the colors for every mark phase, but we arrange
the bitmap such that this cost is negligible. Queinnec’s "mark during
sweep" algorithm [Queinnec, 1989] alternates between two bitmaps like
our approach. However, unlike our approach, both Queinnec and Lamport
still depend on a sweeper to transform the mark bits into a free list
and to reset the mark bits back to white.
## Possible extensions
### 1-bit heap bitmap
With the mark bits no longer part of the heap bitmap, it’s possible we
could pack the heap bitmap more tightly, which would reduce its memory
footprint, improve cache locality, and may improve the performance of
the heap bitmap unroller (the most expensive step of malloc). One of
the two bits encoded in the heap bitmap for every word is a "dead"
bit, which forms a unary representation of the index of the last
pointer word of the object. Furthermore, it’s always safe to increase
this number (this is how we currently steal a bit for the mark bit).
We could store this information in base 2 in an object-indexed
structure and reduce overhead by only storing it for spans with a
sufficiently large size class (where the dead bit optimization
matters). Alternatively, we could continue storing it in unary, but at
lower fidelity, such as one dead bit per eight heap words.
### Reducing atomic access to mark bitmap
If the cost of atomically setting bits in the mark bitmap turns out to
be high, we could instead dedicate a byte per object for the mark.
This idea is mentioned in GC literature [Garner, 2007]. Obviously,
this involves an 8× increase in memory overhead. It’s likely that on
modern hardware, the cost of the atomic bit operation is small, while
the cost of increasing the cache footprint of the mark structure is
probably large.
Another way to reduce atomic access to the mark bitmap is to keep an
additional mark bitmap per P. When the garbage collector checks if an
object is marked, it first consults the shared bitmap. If it is not
marked there, it updates the shared bitmap by reading the entire word
(or cache line) containing the mark bit from each per-P mark bitmap,
combining these words using bitwise-or, and writing the entire word to
the shared bitmap. It can then re-check the bit. When the garbage
collector marks an object, it simply sets the bit in its per-P bitmap.
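A sketch of that per-P scheme follows, with hypothetical names; like the description above, the slow path re-publishes the whole combined word to the shared bitmap rather than atomically or-ing individual bits.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// markPerP sets an object's bit in one P's private bitmap word. No atomics
// are needed: only that P writes its own word.
func markPerP(perP []uint64, p int, mask uint64) {
	perP[p] |= mask
}

// isMarked consults the shared word first. On a miss, it folds every per-P
// word into the shared word with bitwise-or, publishes the combined word,
// and re-checks the bit.
func isMarked(shared *uint64, perP []uint64, mask uint64) bool {
	if atomic.LoadUint64(shared)&mask != 0 {
		return true
	}
	merged := atomic.LoadUint64(shared)
	for _, w := range perP {
		merged |= w
	}
	atomic.StoreUint64(shared, merged) // write the entire combined word back
	return merged&mask != 0
}

func main() {
	var shared uint64
	perP := make([]uint64, 4)
	markPerP(perP, 2, 1<<5) // P2 marks object 5 without touching shared memory
	fmt.Println(isMarked(&shared, perP, 1<<5), isMarked(&shared, perP, 1<<9))
}
```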
## References
Hans-Juergen Boehm and Mark Weiser. 1988. Garbage collection in an
uncooperative environment. Software Practice and Experience 18, 9
(September 1988), 807–820.
Robin Garner, Stephen M. Blackburn, and Daniel Frampton. 2007.
Effective prefetch for mark-sweep garbage collection. In Proceedings
of the 6th international symposium on Memory management (ISMM '07).
ACM, New York, NY, USA, 43–54.
Leslie Lamport. 1976. Garbage collection with multiple processes: An
exercise in parallelism. In International Conference on Parallel
Processing (ICPP). 50–54.
Christian Queinnec, Barbara Beaudoing, and Jean-Pierre Queille. 1989.
Mark DURING Sweep rather than Mark THEN Sweep. In Proceedings of the
Parallel Architectures and Languages Europe, Volume I: Parallel
Architectures (PARLE '89), Eddy Odijk, Martin Rem, and Jean-Claude
Syre (Eds.). Springer-Verlag, London, UK, UK, 224–237.
# Proposal: profile-guided optimization
Authors: Cherry Mui, Austin Clements, Michael Pratt
Last updated: 2022-09-12
Discussion at https://golang.org/issue/55022. \
Previous discussion at https://golang.org/issue/28262.
## Abstract
We propose adding support for profile-guided optimization (PGO) to the Go gc toolchain. PGO will enable the toolchain to perform application- and workload-specific optimizations based on run-time information. Unlike many compiler optimizations, PGO requires user involvement to collect profiles and feed them back into the build process. Hence, we propose a design that centers user experience and ease of deployment and fits naturally into the broader Go build ecosystem.
Our proposed approach uses low-overhead sample-based profiles collected directly from production deployments. This ensures profile data is representative of an application’s real workload, but requires the Go toolchain to cope with stale profiles and profiles from already-optimized binaries. We propose to use the standard and widely-deployed [`runtime/pprof`](https://pkg.go.dev/runtime/pprof) profiler, so users can take advantage of robust tooling for profile collection, processing, and visualization. Pprof is also supported on nearly every operating system, architecture, and deployment environment, unlike hardware-based profiling, which is higher fidelity but generally not available in cloud environments. Users will check in profiles to source control alongside source code, where the `go` tool can transparently supply them to the build and they naturally become part of reproducible builds and the SBOM. Altogether, Go’s vertical integration from build system to toolchain to run-time profiling creates a unique opportunity for a streamlined PGO user experience.
## Background
Profile-guided optimization (PGO), also known as feedback-driven optimization (FDO), is a powerful optimization technique that uses profiles of run-time behavior of a program to guide the compiler’s optimizations of future builds of that program. This technique can be applied to other build stages as well, such as source-code generation, link time or post-link time (e.g. LTO, [BOLT](https://research.fb.com/publications/bolt-a-practical-binary-optimizer-for-data-centers-and-beyond/), [Propeller](https://github.com/google/llvm-propeller/blob/plo-dev/Propeller_RFC.pdf)), and even run time.
PGO has several advantages over traditional, heuristic-based optimization. Many compiler optimizations have trade-offs: for example, inlining improves performance by reducing call overhead and enabling other optimizations; but inlining also increases binary size and hence I-cache pressure, so too much inlining can harm overall performance. Optimization heuristics aim to balance these trade-offs, but rarely achieve the right balance for peak performance of any particular application. Using profile data collected at run time, the compiler has information that is impossible to derive statically because it depends on an application's workload, or is simply too costly to compute within a reasonable compile time budget. Sometimes users turn to source-code level compiler directives such as "inline" directives ([issue 21536](https://golang.org/issue/21536)) to guide the optimizations. However, source-code level compiler directives also do not work well in all situations. For example, a library author would want to mark the functions that are important to the performance of that library for inlining. But to a program, that library may not be performance critical. If we inline all important functions in all libraries, it will result in slow builds, binary size blow-up, and perhaps slower run-time performance. PGO, on the other hand, can use information about the whole program's behavior, and apply optimizations to only the performance-critical part of the program.
## Related work
Various compilers for C/C++ and other languages support instrumentation-based PGO, for example, GCC's `-fprofile-generate` and `-fprofile-use` options, and LLVM's `-fprofile-instr-generate` and `-fprofile-instr-use` options.
GCC and LLVM also support sample-based PGO, such as GCC's `-fauto-profile` option and LLVM's `-fprofile-sample-use` option. They expect profiles collected from Linux perf and then converted to the GCC or LLVM format using the [AutoFDO tool](https://github.com/google/autofdo). For LLVM, LBR profiles are recommended but not strictly required.
Google's AutoFDO (for C/C++ programs) is built on LLVM's sample-based PGO, along with other toolings and mechanisms such as [Google-wide profiling (GWP)](https://research.google/pubs/pub36575). [AutoFDO](https://research.google/pubs/pub45290) improves the performance of C/C++ programs by 5–15% in Google datacenters.
Profile data is also used in various link-time and post link-time optimizers, such as GCC's LIPO, LLVM ThinLTO, BOLT, and Propeller.
## Discussion
### AutoFDO vs. instrumentation-based FDO
In traditional, instrumentation-based FDO, developers use the following process:
1. Build the binary with compiler-inserted instrumentation to collect call and branch edge counts
2. Run a set of benchmarks with instrumentation to collect the profiles
3. Build the final binary based on the profiles
This process looks simple and straightforward. It generally does not require any special tooling support (besides the instrumentation and the optimizations in the compiler) because the profile is used immediately for the optimized build. With instrumentation, it is relatively easy to collect a broad range of data beyond branches, for example, specific values of variables or function parameters in common cases.
But it has a few key drawbacks. The instrumentation typically has a non-trivial overhead, making it generally infeasible to run the instrumented programs directly in production. Therefore, this approach requires high-quality benchmarks with representative workloads. As the source code evolves, one needs to update the benchmarks, which typically requires manual work. Also, the workload may shift from time to time, making the benchmarks no longer representative of real use cases. This workflow may also require more manual steps for running the benchmarks and building the optimized binary.
To address the issues above, AutoFDO is a more recent approach. Instead of instrumenting the binary and collecting profiles from special benchmark runs, sample-based profiles are collected directly from real production uses using regular production builds. The overhead of profile collection is low enough that profiles can be regularly collected from production. A big advantage of this approach is that the profiles represent the actual production use case with real workloads. It also simplifies the workflow by eliminating the instrumentation and benchmarking steps.
The AutoFDO style workflow imposes more requirements on tooling. As the profiles are collected from production binaries, which are already optimized (even with PGO), it may have different performance characteristics from a non-PGO binary and the profiles may be skewed. For example, a profile may indicate a function is “hot”, causing the compiler to optimize that function such that it no longer takes much time in production. When that binary is deployed to production, the profile will no longer indicate that function is hot, so it will not be optimized in the next PGO build. If we apply PGO iteratively, the performance of the output binaries may not be stable, resulting in "flapping" [[Chen ‘16](https://research.google/pubs/pub45290), Section 5.2]. For production binaries it is important to have predictable performance, so we need to maintain iterative stability.
Also, while a binary is running in production and profiles are being collected, there may be active development going on and the source code may change. If profiles collected from the previous version of the binary cannot be used to guide optimizations for a new version of the program, deploying a new version may cause performance degradation. Therefore it is a requirement that profiles be robust to source code changes, with minimum performance degradation. Finally, the compiler may change from time to time. Similarly, profiles collected with binaries built with the previous version of the compiler should still provide meaningful optimization hints for the new version of the compiler.
It is also possible to run a traditional FDO-style build using AutoFDO. To do so, one does not need to instrument the binary, but just run the benchmarks with sample-based profiling enabled. Then immediately use the collected profiles as input to the compiler. In this case, one can use the tooling for AutoFDO, just with profiles from the benchmarks instead of those from production.
As the tooling for AutoFDO is more powerful, capable of handling most of the manual-FDO style use cases, and in some circumstances greatly simplifies the user experience, we choose the AutoFDO style as our approach.
### Requirements
#### Reproducible builds
The Go toolchain produces reproducible builds; that is, the output binary is byte-for-byte identical if the inputs to a build (source code, build flags, some environment variables) are identical. This is critical for the build cache to work and for aspects of software supply-chain security, and can be greatly helpful for debugging.
For PGO builds we should maintain this feature. As the compiler output depends on the profiles used for optimization, the content of the profiles will need to be considered as input to the build, and be incorporated into the build cache key calculation.
For one to easily reproduce the build (for example, for debugging), the profiles need to be stored in known stable locations. We propose that, by default, profiles are stored in the same directory as the main package, alongside other source files, with the option of specifying a profile from another location.
#### Stability to source code and compiler changes
As discussed above, as the source code evolves the profiles could become stale. It is important that this does not cause significant performance degradation.
For a code change that is local to a small number of functions, most functions are not changed and therefore profiles can still apply. For the unchanged functions their absolute locations may change (such as line number shifts or code moving). We propose using function-relative line numbers so PGO tolerates location shifts. Using only the function name and relative line number also handles source file renames.
For functions that are actually changed, the simplest approach is to consider the old profile invalid. This would cause performance degradation but it is limited to a single function level. There are several possible approaches to detecting function changes, such as requiring access to the previous source code, or recording a hash of each function AST in the binary and copying it to the profile. With more detailed information, the compiler could even invalidate the profiles for only the sub-trees of the AST that actually changed. Another possibility is to not detect source code changes at all. This can lead to suboptimal optimizations, but [AutoFDO](https://research.google/pubs/pub45290) showed that this simple solution is surprisingly effective because profiles are typically flat and usually not all hot functions change at the same time.
For large-scale refactoring, much information, such as source code locations and names of many functions, can change at once. To avoid invalidating profiles for all the changed functions, a tool could map the old function names and locations in the profile to a new profile with updated function names and locations.
Profile stability across compiler changes is mostly not a problem if profiles record source level information.
#### Iterative stability
Another aspect of stability, especially with the AutoFDO approach, is iterative stability, also discussed above. Because we expect PGO-optimized binaries to be deployed to production and also expect the profiles that drive a PGO build to come from production, it’s important that we support users collecting profiles from PGO-optimized binaries for use in the next PGO build. That is, with an AutoFDO approach, the build/deploy/profile process becomes a closed loop. If we’re not careful in the implementation of PGO, this closed loop can easily result in performance “flapping”, where a profile-driven optimization performed in one build interferes with the same optimization happening in the next build, causing performance to oscillate.
Based on the findings from AutoFDO, we plan to tackle this on a case-by-case basis. For PGO-based inlining, since call stacks will include inlined frames, it’s likely that hot calls will remain hot after inlining. For some optimizations, we may simply have to be careful to consider the effect of the optimization on profiles.
### Profile sources and formats
There are multiple ways to acquire a profile. Pprof profiles from the `runtime/pprof` package are widely used in the Go community. And the underlying implementation mechanisms (signal-based on UNIXy systems) are generally available on a wide range of CPU architectures and OSes.
Hardware performance monitors, such as last branch records (LBR), can provide very accurate information about the program, and can be collected on Linux using the perf command, when it is available. However, hardware performance monitors are not always available, such as on most of the non-x86 architectures, non-Linux OSes, and usually on cloud VMs.
Lastly, there may be use cases for customized profiles, especially for profiling programs' memory behavior.
We plan to initially support pprof CPU profiles directly in the compiler and build system. This has significant usability advantages, since many Go users are already familiar with pprof profiles and infrastructure already exists to collect, view, and manipulate them. There are also [existing tools](https://github.com/google/perf_data_converter) to convert other formats, such as Linux perf, to pprof. The format is expressive enough to contain the information we need; notably the function symbols, relative offsets and line numbers necessary to make checked-in profiles stable across minor source code changes. Finally, the pprof format uses protocol buffers, so it is highly extensible and likely flexible enough to support any custom profile needs we may have in the future. It has some downsides: it’s a relatively complex format to process in the compiler, as a binary format it’s not version-control friendly, and directly producing source-stable profiles may require more runtime metadata that will make binaries larger. It will also require every invocation of the compiler to read the entire profile, even though it will discard most of the profile, which may scale poorly with the size of the application. If it turns out these downsides outweigh the benefits, we can revisit this decision and create a new intermediate profile format and tools to convert pprof to this format. Some of these downsides can be solved transparently by converting the profile to a simpler, indexed format at build time and storing this processed profile in the build cache.
## Proposal
We propose to add profile-guided optimization to Go.
### Profile sources and formats
We will initially support pprof CPU profiles. Developers can collect profiles through usual means, such as the `runtime/pprof` or `net/http/pprof` packages. The compiler will directly read pprof CPU profiles.
In the future we may support more types of profiles (see below).
### Changes to the go command
The `go` command will search for a profile named `default.pgo` in the source directory of the main package and, if present, will supply it to the compiler for all packages in the transitive dependencies of the main package. The `go` command will report an error if it finds a `default.pgo` in any non-main package directory. In the future, we may support automatic lookup for different profiles for different build configurations (e.g., GOOS/GOARCH), but for now we expect profiles to be fairly portable between configurations.
We will also add a `-pgo=<path>` command line flag to `go build` that specifies an explicit profile location to use for a PGO build. A command line flag can be useful in the cases of
- a program with the same source code has multiple use cases, with different profiles
- build configuration significantly affects the profile
- testing with different profiles
- disabling PGO even if a profile is present
Specifically, `-pgo=<path>` will select the profile at `path`, `-pgo=auto` will select the profile stored in the source directory of the main package if there is one (otherwise no-op), and `-pgo=off` will turn off PGO entirely, even if there is a profile present in the main package's source directory.
For Go 1.20, it will default to `off`, so in a default setting PGO will not be enabled. In a future release it will default to `auto`.
To ensure reproducible builds, the content of the profile will be considered an input to the build, and will be incorporated into the build cache key calculation and [`BuildInfo`](https://pkg.go.dev/runtime/debug#BuildInfo).
`go test` of a main package will use the same profile search rules as `go build`. For non-main packages, it will not automatically provide a profile even though it’s building a binary. If a user wishes to test (or, more likely, benchmark) a package as it is compiled for a particular binary, they can explicitly supply the path to the main package’s profile, but the `go` tool has no way of automatically determining this.
### Changes to the compiler
We will modify the compiler to support reading pprof profiles passed in from the `go` command, and modify its optimization heuristics to use this profile information. This does not require a new API. The implementation details are not included in this proposal.
Initially we plan to add PGO-based inlining. More optimizations may be added in the future.
## Compatibility
This proposal is Go 1-compatible.
## Implementation
We plan to implement a preview of PGO in Go 1.20.
Raj Barik and Jin Lin plan to contribute their work on the compiler implementation.
## Future work
### Profile collection
Currently, the `runtime/pprof` API has limitations: it is not easily configurable and not extensible. For example, setting the profile rate is cumbersome (see [issue 40094](https://golang.org/issue/40094)). If we extend the profiles for PGO (e.g. adding customized events), the current API is also insufficient. One option is to add an extensible and configurable API for profile collection (see [issue 42502](https://golang.org/issue/42502)). As PGO profiles may go beyond CPU profiles, we could also have a "collect a PGO profile" API, which enables a (possibly configurable) set of profiles to collect specifically for PGO.
The `net/http/pprof` package may be updated to include more endpoints and handlers accordingly.
We could consider adding additional command line flags to `go test`, similar to `-cpuprofile` and `-memprofile`. However, `go test -bench` is mostly for running micro-benchmarks and may not be a desired usage for PGO. Perhaps it is better to leave the flag out.
To use Linux perf profiles, the user (or the execution environment) will be responsible for starting or attaching `perf`. We could also consider collecting a small set of hardware performance counters that are commonly used and generally available in pprof profiles (see [issue 36821](https://golang.org/issue/36821)).
### Non-main packages
PGO may be beneficial to not only executables but also libraries (i.e. non-main packages). If a profile of the executable is present, it will be used for the build. If the main package does not include a profile, however, we could consider using the profiles of the libraries, to optimize the functions from those libraries (and their dependencies).
Details still need to be considered, especially for complex situations such as multiple non-main packages providing profiles. For now, we only support PGO at the executable level.
### Optimizations
In the future we may add more PGO-based optimizations, such as devirtualization, stenciling of specific generic functions, basic block ordering, and function layout. We are also considering using PGO to improve memory behavior, such as improvements on the escape analysis and allocations.
# Proposal: Add support for CIDR notation in no_proxy variable
Author(s): Rudi Kramer, James Forrest
Last updated: 2017-07-10
Discussion at https://golang.org/issue/16704.
## Abstract
The old convention for no_proxy is to use a full domain name, a partial domain
name, a single IP address, or a combination of these.
The newer convention is to allow users to add in networks using the CIDR
notation. This proposal aims to update Go to allow for CIDR notation in
no_proxy.
## Background
There is no official spec for no_proxy but the older convention was to use only
domain names, partial domain names or singular IP addresses.
Many applications and programming languages have started to allow users to
specify networks using the CIDR notation.
## Proposal
This proposal is to update Go's net/http package to allow users to add
IPv4/CIDR or IPv6/CIDR ranges to the no_proxy env and have Go correctly route
traffic based on these networks.
## Rationale
Networks are becoming more and more complex, and with the advent of
applications like Kubernetes it is more important than ever to allow network
ranges to be specified in Go from user space; the most common convention for
doing so is the no_proxy env.
To use the current no_proxy implementation, I would need to add 65534
individual IP addresses to no_proxy in order to resolve issues like
https://github.com/projectcalico/calico/issues/872.
## Compatibility
This change will not affect any backwards compatibility or introduce any
breaking changes to existing applications except to properly implement CIDR
notation where it is currently not working.
## Implementation
The Python method that determines whether the request URL should bypass the proxy because it matches the no_proxy list accepts two arguments: the request URL and no_proxy.
First, no_proxy is taken either from the passed-in argument or from the environment variables.
Next, the request URL is reduced to its domain name and port number, also known as the netloc.
If the no_proxy variable is set, the method checks whether the request URL is a valid IP address.
If the request URL is a valid IP address, the method iterates over all the entries in the no_proxy list.
If a no_proxy entry is a network in CIDR notation and it contains the request IP, the proxy is bypassed.
If a no_proxy entry is a single IP address and it matches the request IP, the proxy is bypassed.
If the request URL is not a valid IP address, it is assumed to be a hostname.
The method then iterates over all entries in the no_proxy list.
If the request URL's netloc ends with an entry, the proxy is bypassed.
## Open issues (if applicable)
# Proposal: Eliminate STW stack re-scanning
Author(s): Austin Clements, Rick Hudson
Last updated: 2016-10-21
Discussion at https://golang.org/issue/17503.
## Abstract
As of Go 1.7, the one remaining source of unbounded and potentially
non-trivial stop-the-world (STW) time is stack re-scanning.
We propose to eliminate the need for stack re-scanning by switching to
a *hybrid write barrier* that combines a Yuasa-style deletion write
barrier [Yuasa '90] and a Dijkstra-style insertion write barrier
[Dijkstra '78].
Preliminary experiments show that this can reduce worst-case STW time
to under 50µs, and this approach may make it practical to eliminate
STW mark termination altogether.
Eliminating stack re-scanning will in turn simplify and eliminate many
other parts of the garbage collector that exist solely to improve the
performance of stack re-scanning.
This includes stack barriers (which introduce significant complexity
in many parts of the runtime) and maintenance of the re-scan list.
Hence, in addition to substantially improving STW time, the hybrid
write barrier should also reduce the overall complexity of the garbage
collector.
## Background
The Go garbage collector is a *tricolor* concurrent collector
[Dijkstra '78].
Every object is shaded either white, grey, or black.
At the beginning of a GC cycle, all objects are white, and it is the
goal of the garbage collector to mark all reachable objects black and
then free all white objects.
The garbage collector achieves this by shading GC roots (stacks and
globals, primarily) grey and then endeavoring to turn all grey objects
black while satisfying the *strong tricolor invariant*:
> No black object may contain a pointer to a white object.
Ensuring the tricolor invariant in the presence of concurrent pointer
updates requires *barriers* on either pointer reads or pointer writes
(or both).
There are many flavors of barrier [Pirinen '98].
Go 1.7 uses a coarsened Dijkstra write barrier [Dijkstra '78], where
pointer writes are implemented as follows:
```
writePointer(slot, ptr):
shade(ptr)
*slot = ptr
```
`shade(ptr)` marks the object at `ptr` grey if it is not already grey
or black.
This ensures the strong tricolor invariant by conservatively assuming
that `*slot` may be in a black object, and ensuring `ptr` cannot be
white before installing it in `*slot`.
The Dijkstra barrier has several advantages over other types of
barriers.
It does not require any special handling of pointer reads, which has
performance advantages since pointer reads tend to outweigh pointer
writes by an order of magnitude or more.
It also ensures forward progress; unlike, for example, the Steele
write barrier [Steele '75], objects transition monotonically from
white to grey to black, so the total work is bounded by the heap size.
However, it also has disadvantages.
In particular, it presents a trade-off for pointers on stacks: either
writes to pointers on the stack must have write barriers, which is
prohibitively expensive, or stacks must be *permagrey*.
Go chooses the latter, which means that many stacks must be re-scanned
during STW.
The garbage collector first scans all stacks at the beginning of the
GC cycle to collect roots.
However, without stack write barriers, we can't ensure that the stack
won't later contain a reference to a white object, so a scanned stack
is only black until its goroutine executes again, at which point it
conservatively reverts to grey.
Thus, at the end of the cycle, the garbage collector must re-scan grey
stacks to blacken them and finish marking any remaining heap pointers.
Since it must ensure the stacks don't continue to change during this,
the whole re-scan process happens *with the world stopped*.
Re-scanning the stacks can take tens to hundreds of milliseconds in an
application with a large number of active goroutines.
## Proposal
We propose to eliminate stack re-scanning and replace Go's write
barrier with a *hybrid write barrier* that combines a Yuasa-style
deletion write barrier [Yuasa '90] with a Dijkstra-style insertion
write barrier [Dijkstra '78].
The hybrid write barrier is implemented as follows:
```
writePointer(slot, ptr):
shade(*slot)
if current stack is grey:
shade(ptr)
*slot = ptr
```
That is, the write barrier shades the object whose reference is being
overwritten, and, if the current goroutine's stack has not yet been
scanned, also shades the reference being installed.
The hybrid barrier makes stack re-scanning unnecessary; once a stack
has been scanned and blackened, it remains black.
Hence, it eliminates the need for stack re-scanning and the mechanisms
that exist to support stack re-scanning, including stack barriers and
the re-scan list.
The hybrid barrier requires that objects be allocated black
(allocate-white is a common policy, but incompatible with this
barrier).
However, while not required by Go's current write barrier, Go already
allocates black for other reasons, so no change to allocation is
necessary.
The hybrid write barrier is equivalent to the "double write barrier"
used in the adaptation of Metronome used in the IBM real-time Java
implementation [Auerbach '07]. In that case, the garbage collector was
incremental, rather than concurrent, but ultimately had to deal with
the same problem of tightly bounded stop-the-world times.
### Reasoning
A full proof of the hybrid write barrier is given at the end of this
proposal.
Here we give the high-level intuition behind the barrier.
Unlike the Dijkstra write barrier, the hybrid barrier does *not*
satisfy the strong tricolor invariant: for example, a black goroutine
(a goroutine whose stack has been scanned) can write a pointer to a
white object into a black object without shading the white object.
However, it does satisfy the *weak tricolor invariant* [Pirinen '98]:
> Any white object pointed to by a black object is reachable from a
> grey object via a chain of white pointers (it is *grey-protected*).
The weak tricolor invariant observes that it's okay for a black object
to point to a white object, as long as *some* path ensures the garbage
collector will get around to marking that white object.
Any write barrier has to prohibit a mutator from "hiding" an object;
that is, rearranging the heap graph to violate the weak tricolor
invariant so the garbage collector fails to mark a reachable object.
For example, in a sense, the Dijkstra barrier allows a mutator to hide
a white object by moving the sole pointer to it to a stack that has
already been scanned.
The Dijkstra barrier addresses this by making stacks permagray and
re-scanning them during STW.
In the hybrid barrier, the two shades and the condition work together
to prevent a mutator from hiding an object:
1. `shade(*slot)` prevents a mutator from hiding an object by moving
the sole pointer to it from the heap to its stack.
If it attempts to unlink an object from the heap, this will shade
it.
2. `shade(ptr)` prevents a mutator from hiding an object by moving the
sole pointer to it from its stack into a black object in the heap.
If it attempts to install the pointer into a black object, this
will shade it.
3. Once a goroutine's stack is black, the `shade(ptr)` becomes
unnecessary.
`shade(ptr)` prevents hiding an object by moving it from the stack
to the heap, but this requires first having a pointer hidden on the
stack.
Immediately after a stack is scanned, it only points to shaded
objects, so it's not hiding anything, and the `shade(*slot)`
prevents it from hiding any other pointers on its stack.
The hybrid barrier combines the best of the Dijkstra barrier and the
Yuasa barrier.
The Yuasa barrier requires a STW at the beginning of marking to either
scan or snapshot stacks, but does not require a re-scan at the end of
marking.
The Dijkstra barrier lets concurrent marking start right away, but
requires a STW at the end of marking to re-scan stacks (though more
sophisticated non-STW approaches are possible [Hudson '97]).
The hybrid barrier inherits the best properties of both, allowing
stacks to be concurrently scanned at the beginning of the mark phase,
while also keeping stacks black after this initial scan.
## Rationale
The advantage of the hybrid barrier is that it lets a stack scan
permanently blacken a stack (without a STW and without write barriers
to the stack), which entirely eliminates the need for stack
re-scanning, in turn eliminating the need for stack barriers and
re-scan lists.
Stack barriers in particular introduce significant complexity
throughout the runtime, as well as interfering with stack walks from
external tools such as GDB and kernel-based profilers.
Also, like the Dijkstra-style write barrier, the hybrid barrier does
not require a read barrier, so pointer reads are regular memory reads;
and it ensures progress, since objects progress monotonically from
white to grey to black.
The disadvantages of the hybrid barrier are minor.
It may result in more floating garbage, since it retains everything
reachable from roots (other than stacks) at any point during the mark
phase.
However, in practice it's likely that the current Dijkstra barrier is
retaining nearly as much.
The hybrid barrier also prohibits certain optimizations: in
particular, the Go compiler currently omits a write barrier if it can
statically show that the pointer is nil, but the hybrid barrier
requires a write barrier in this case.
This may slightly increase binary size.
### Alternative barrier approaches
There are several variations on the proposed barrier that would also
work, but we believe the proposed barrier represents the best set of
trade-offs.
A basic variation is to make the Dijkstra-style aspect of the barrier
unconditional:
```
writePointer(slot, ptr):
shade(*slot)
shade(ptr)
*slot = ptr
```
The main advantage of this barrier is that it's easier to reason
about.
It directly ensures there are no black-to-white pointers in the heap,
so the only source of black-to-white pointers can be scanned stacks.
But once a stack is scanned, the only way it can get a white pointer
is by traversing reachable objects, and any white object that can be
reached by a goroutine with a black stack is grey-protected by a heap
object.
The disadvantage of this barrier is that it's effectively twice as
expensive as the proposed barrier for most of the mark phase.
Similarly, we could simply coarsen the stack condition:
```
writePointer(slot, ptr):
shade(*slot)
if any stack is grey:
shade(ptr)
*slot = ptr
```
This has the advantage of making cross-stack writes such as those
allowed by channels safe without any special handling, but prolongs
when the second shade is enabled, which slows down pointer writes.
A different approach would be to require that all stacks be blackened
before any heap objects are blackened, which would enable a pure
Yuasa-style deletion barrier:
```
writePointer(slot, ptr):
shade(*slot)
*slot = ptr
```
As originally proposed, the Yuasa barrier takes a complete snapshot of
the stack before proceeding with marking.
Yuasa argued that this was reasonable on hardware that could perform
bulk memory copies very quickly.
However, Yuasa's proposal was in the context of a single-threaded
system with a comparatively small stack, while Go programs regularly
have thousands of stacks that can total to a large amount of memory.
However, this complete snapshot isn't necessary.
It's sufficient to ensure all stacks are black before scanning any
heap objects.
This allows stack scanning to proceed concurrently, but has the
downside that it introduces a bottleneck to the parallelism of the
mark phase between stack scanning and heap scanning.
This bottleneck has downstream effects on goroutine availability,
since allocation is paced against marking progress.
Finally, there are other types of *black mutator* barrier techniques.
However, as shown by Pirinen, all possible black mutator barriers
other than the Yuasa barrier require a read barrier [Pirinen '98].
Given the relative frequency of pointer reads to writes, we consider
this unacceptable for application performance.
### Alternative approaches to re-scanning
Going further afield, it's also possible to make stack re-scanning
concurrent without eliminating it [Hudson '97].
This does not require changes to the write barrier, but does introduce
significant additional complexity into stack re-scanning.
Proposal #17505 gives a detailed design for how to do concurrent stack
re-scanning in Go.
## Other considerations
### Channel operations and go statements
The hybrid barrier assumes a goroutine cannot write to another
goroutine's stack.
This is true in Go except for two operations: channel sends and
starting goroutines, which can copy values directly from one stack to
another.
For channel operations, the `shade(ptr)` is necessary if *either* the
source stack or the destination stack is grey.
For starting a goroutine, the destination stack is always black, so
the `shade(ptr)` is necessary if the source stack is grey.
### Racy programs
In a racy program, two goroutines may store to the same pointer
simultaneously and invoke the write barrier concurrently on the same
slot.
The hazard is that this may cause the barrier to fail to shade some
object that it would have shaded in a sequential execution,
particularly given a relaxed memory model.
While racy Go programs are generally undefined, we have so far
maintained that a racy program cannot trivially defeat the soundness
of the garbage collector (since a racy program can defeat the type
system, it can technically do anything, but we try to keep the garbage
collector working as long as the program stays within the type
system).
Suppose *optr* is the value of the slot before either write to the
slot and *ptr1* and *ptr2* are the two pointers being written to the
slot.
"Before" is well-defined here because all architectures that Go
supports have *coherency*, which means there is a total order over all
reads and writes of a single memory location.
If the goroutines' respective stacks have not been scanned, then *ptr1*
and *ptr2* will clearly be shaded, since those shades don't read from
memory.
Hence, the difficult case is if the goroutines' stacks have been
scanned.
In this case, the barriers reduce to:
<table>
<tr><th>Goroutine G1</th><th>Goroutine G2</th></tr>
<tr><td>
<pre>optr1 = *slot
shade(optr1)
*slot = ptr1</pre>
</td><td>
<pre>optr2 = *slot
shade(optr2)
*slot = ptr2</pre>
</td></tr>
</table>
Given that we're only dealing with one memory location, the property
of coherence means we can reason about this execution as if it were
sequentially consistent.
Given this, concurrent execution of the write barriers permits one
outcome that is not permitted by sequential execution: if both
barriers read `*slot` before assigning to it, then only *optr* will be
shaded, and neither *ptr1* nor *ptr2* will be shaded by the barrier.
For example:
<table>
<tr><th>Goroutine G1</th><th>Goroutine G2</th></tr>
<tr><td>
<pre>optr1 = *slot
shade(optr1)
*slot = ptr1
</pre>
</td><td>
<pre>optr2 = *slot
shade(optr2)
*slot = ptr2</pre>
</td></tr>
</table>
We assert that this is safe.
Suppose *ptr1* is written first.
This execution is *nearly* indistinguishable from an execution that
simply skips the write of *ptr1*.
The only way to distinguish it is if a read from another goroutine
*G3* observes *slot* between the two writes.
However, given our assumption that stacks have already been scanned,
either *ptr1* is already shaded, or it must be reachable from some
other place in the heap anyway (and will be shaded eventually), so
concurrently observing *ptr1* doesn't affect the marking or
reachability of *ptr1*.
### cgo
The hybrid barrier could be a problem if C code overwrites a Go
pointer in Go memory with either nil or a C pointer.
Currently, this operation does not require a barrier, but with any
sort of deletion barrier, this does require the barrier.
However, a program that does this would violate the cgo pointer
passing rules, since Go code is not allowed to pass memory to a C
function that contains Go pointers.
Furthermore, this is one of the "cheap dynamic checks" enabled by the
default setting of `GODEBUG=cgocheck=1`, so any program that violates
this rule will panic unless the checks have been explicitly disabled.
## Future directions
### Write barrier omission
The current write barrier can be omitted by the compiler in cases
where the compiler knows the pointer being written is permanently
shaded, such as nil pointers, pointers to globals, and pointers to
static data.
These optimizations are generally unsafe with the hybrid barrier.
However, if the compiler can show that *both* the current value of the
slot and the value being written are permanently shaded, then it can
still safely omit the write barrier.
This optimization is aided by the fact that newly allocated objects
are zeroed, so all pointer slots start out pointing to nil, which is
permanently shaded.
### Low-pause stack scans
Currently the garbage collector pauses a goroutine while scanning its
stack.
If goroutines have large stacks, this can introduce significant tail
latency effects.
The hybrid barrier and the removal of the existing stack barrier
mechanism would make it feasible to perform stack scans with only
brief goroutine pauses.
In this design, scanning a stack pauses the goroutine briefly while it
scans the active frame.
It then installs a *blocking stack barrier* over the return to the
next frame and lets the goroutine resume.
Stack scanning then continues toward the outer frames, moving the
stack barrier up the stack as it goes.
If the goroutine does return as far as the stack barrier, before it
can return to an unscanned frame, the stack barrier blocks until
scanning can scan that frame and move the barrier further up the
stack.
One complication is that a running goroutine could attempt to grow its
stack during the stack scan.
The simplest solution is to block the goroutine if this happens until
the scan is done.
Like the current stack barriers, this depends on write barriers when
writing through pointers to other frames.
For example, in a partially scanned stack, an active frame could use
an up-pointer to move a pointer to a white object out of an unscanned
frame and into the active frame.
Without a write barrier on the write that removes the pointer from the
unscanned frame, this could hide the white object from the garbage
collector.
However, with write barriers on up-pointers, this is safe.
Rather than arguing about "partially black" stacks, the write barrier
on up-pointers lets us view the stack as a sequence of separate
frames, where unscanned frames are treated as part of the *heap*.
Writes without write barriers can only happen to the active frame, so
we only have to view the active frame as the stack.
This design is technically possible now, but the complexity of
composing it with the existing stack barrier mechanism makes it
unappealing.
With the existing stack barriers gone, the implementation of this
approach becomes relatively straightforward.
It's also generally simpler than the existing stack barriers in many
dimensions, since there are at most two stack barriers per goroutine
at a time, and they are present only during the stack scan.
### Strictly bounded mark termination
This proposal goes a long way toward strictly bounding the time spent
in STW mark termination, but there are some other known causes of
longer mark termination pauses.
The primary cause is a race that can trigger mark termination while
there is still remaining heap mark work.
This race and how to resolve it are detailed in the "Mark completion
race" appendix.
### Concurrent mark termination
With stack re-scanning out of mark termination, it may become
practical to make the remaining tasks in mark termination concurrent
and eliminate the mark termination STW entirely.
On the other hand, the hybrid barrier may reduce STW so much that
completely eliminating it is not a practical concern.
The following is a probably incomplete list of remaining mark
termination tasks and how to address them.
Worker stack scans can be eliminated by having workers self-scan
during mark.
Scanning the finalizer queue can be eliminated by adding explicit
barriers to `queuefinalizer`.
Without these two scans (and with the fix for the mark completion race
detailed in the appendix), mark termination will produce no mark work,
so finishing the work queue drain also becomes unnecessary.
`mcache`s can be flushed lazily at the beginning of the sweep phase
using rolling synchronization.
Flushing the heap profile can be done immediately at the beginning of
sweep (this is already concurrent-safe, in fact).
Finally, updating global statistics can be done using atomics and
possibly a global memory barrier.
### Concurrent sweep termination
Likewise, it may be practical to eliminate the STW for sweep
termination.
This is slightly complicated by the fact that the hybrid barrier
requires a global memory fence at the beginning of the mark phase to
enable the write barrier and ensure all pointer writes prior to
enabling the write barrier are visible to the write barrier.
Currently, the STW for sweep termination and setting up the mark phase
accomplishes this.
If we were to make sweep termination concurrent, we could instead use
a ragged barrier to accomplish the global memory fence, or the
[`membarrier` syscall](http://man7.org/linux/man-pages/man2/membarrier.2.html)
on recent Linux kernels.
## Compatibility
This proposal does not affect the language or any APIs and hence
satisfies the Go 1 compatibility guidelines.
## Implementation
Austin plans to implement the hybrid barrier during the Go 1.8
development cycle.
For 1.8, we will leave stack re-scanning support in the runtime for
debugging purposes, but disable it by default using a `GODEBUG`
variable.
Assuming things go smoothly, we will remove stack re-scanning support
when the tree opens for Go 1.9 development.
The planned implementation approach is:
1. Fix places that do unusual or "clever" things with memory
containing pointers and make sure they cooperate with the hybrid
barrier.
We'll presumably find more of these as we debug in later steps, but
we'll have to make at least the following changes:
1. Ensure barriers on stack-to-stack copies for channel sends and
starting goroutines.
2. Check all places where we clear memory since the hybrid barrier
requires distinguishing between clearing for initialization and
clearing already-initialized memory.
This will require a barrier-aware `memclr` and disabling the
`duffzero` optimization for pointers with types.
3. Check all uses of unmanaged memory in the runtime to make sure
it is initialized properly.
This is particularly important for pools of unmanaged memory
such as the fixalloc allocator that may reuse memory.
2. Implement concurrent scanning of background mark worker stacks.
Currently these are placed on the rescan list and *only* scanned
during mark termination, but we're going to disable the rescan
list.
We could arrange for background mark workers to scan their own
stacks, or explicitly keep track of heap pointers on background
mark worker stacks.
3. Modify the write barrier to implement the hybrid write barrier and
the compiler to disable write barrier elision optimizations that
aren't valid for the hybrid barrier.
4. Disable stack re-scanning by making rescan enqueuing a no-op unless
a `GODEBUG` variable is set.
Likewise, disable stack barrier insertion unless this variable is
set.
5. Use checkmark mode and stress testing to verify that no objects are
missed.
6. Wait for the Go 1.9 development cycle.
7. Remove stack re-scanning, the rescan list, stack barriers, and the
`GODEBUG` variable to enable re-scanning.
Possibly, switch to low-pause stack scans, which can reuse some of
the stack barrier mechanism.
## Appendix: Mark completion race
Currently, because of a race in the mark completion condition, the
garbage collector can begin mark termination when there is still
available mark work.
This is safe because mark termination will finish draining this work,
but it makes mark termination unbounded.
This also interferes with possible further optimizations that remove
all mark work from mark termination.
Specifically, the following interleaving starts mark termination
without draining all mark work:
Initially `workers` is zero and there is one buffer on the full list.
<table>
<tr><th>Thread 1</th><th>Thread 2</th></tr>
<tr><td><pre>
inc(&workers)
gcDrain(&work)
=> acquires only full buffer
=> adds more pointers to work
work.dispose()
=> returns buffer to full list
n := dec(&workers) [n=0]
if n == 0 && [true]
full.empty() && [true]
markrootNext >= markrootJobs { [true]
startMarkTerm()
}
</pre></td><td><pre>
inc(&workers)
gcDrain(&work)
=> acquires only full buffer
=> adds more pointers to work
...
</pre></td></tr></table>
In this example, a race between observing the `workers` count and
observing the state of the full list causes thread 1 to start mark
termination prematurely.
Simply checking `full.empty()` before decrementing `workers` exhibits
a similar race.
To fix this race, we propose introducing a single atomic non-zero
indicator for the number of non-empty work buffers.
Specifically, this will count the number of work caches that are
caching a non-empty work buffer plus one for a non-empty full list.
Many buffer list operations can be done without modifying this count,
so we believe it will not be highly contended.
If this does prove to be a scalability issue, there are well-known
techniques for scalable non-zero indicators [Ellen '07].
## Appendix: Proof of soundness, completeness, and boundedness
<!-- TODO: Show that the only necessary memory fence is a global
!-- store/load fence between enabling the write barrier and
!-- blackening to ensure visibility of all pointers and the write
!-- barrier flag.
!-->
This section argues that the hybrid write barrier satisfies the weak
tricolor invariant, and hence is sound in the sense that it does not
collect reachable objects; that it terminates in a bounded number of
steps; and that it eventually collects all unreachable objects, and
hence is complete.
We have also further verified these properties using a
[randomized stateless model](https://github.com/aclements/go-misc/blob/master/go-weave/models/yuasa.go).
The following proofs consider global objects to be a subset of the
heap objects.
This is valid because the write barrier applies equally to global
objects.
Similarly, we omit explicit discussion of nil pointers, since the nil
pointer can be considered an always-black heap object of zero size.
The hybrid write barrier satisfies the *weak tricolor invariant*
[Pirinen '98].
However, rather than directly proving this, we prove that it satisfies
the following *modified tricolor invariant*:
> Any white object pointed to by a black object is grey-protected by a
> heap object (reachable via a chain of white pointers from the grey
> heap object).
> That is, for every *B -> W* edge, there is a path *G -> W₁ -> ⋯ ->
> Wₙ -> W* where *G* is a grey heap object.

This is identical to the weak tricolor invariant, except that it
requires that the grey-protector is a heap object.
This trivially implies the weak tricolor invariant, but gives us a
stronger basis for induction in the proof.
<!-- Thoughts on how to simplify the proof:
Perhaps define a more general notion of a "heap-protected object",
which is either black, grey, or grey-protected by a heap object.
-->
Lemma 1 establishes a simple property of paths we'll use several
times.
**Lemma 1.** In a path *O₁ -> ⋯ -> Oₙ* where *O₁* is a heap object,
all *Oᵢ* must be heap objects.
**Proof.** Since *O₁* is a heap object and heap objects can only point
to other heap objects, by induction, all *Oᵢ* must be heap objects.
∎
In particular, if some object is grey-protected by a heap object,
every object in the grey-protecting path must be a heap object.
Lemma 2 extends the modified tricolor invariant to white objects that
are *indirectly* reachable from black objects.
**Lemma 2.** If the object graph satisfies the modified tricolor
invariant, then every white object reachable (directly or indirectly)
from a black object is grey-protected by a heap object.
<!-- Alternatively: every object reachable from a black object is
either black, grey, or grey-protected by a heap object. -->
**Proof.** Let *W* be a white object reachable from black object *B*
via simple path *B -> O₁ -> ⋯ -> Oₙ -> W*.
Note that *W* and all *Oᵢ* must be heap objects because stacks can
only point to themselves (in which case it would not be a simple path)
or heap objects, so *O₁* must be a heap object, and by lemma 1, the
rest of the path must be heap objects.
Without loss of generality, we can assume none of *Oᵢ* are black;
otherwise, we can simply reconsider using the shortest path suffix
that starts with a black object.
If there are no *Oᵢ*, *B* points directly to *W* and the modified
tricolor invariant directly implies that *W* is grey-protected by a
heap object.
If any *Oᵢ* is grey, then *W* is grey-protected by the last grey
object in the path.
Otherwise, all *Oᵢ* are white.
Since *O₁* is a white object pointed to by a black object, *O₁* is
grey-protected by some path *G -> W₁ -> ⋯ -> Wₙ -> O₁* where *G* is a
heap object.
Thus, *W* is grey-protected by *G -> W₁ -> ⋯ -> Wₙ -> O₁ -> ⋯ -> Oₙ ->
W*.
∎
Lemma 3 builds on lemma 2 to establish properties of objects reachable
by goroutines.
**Lemma 3.** If the object graph satisfies the modified tricolor
invariant, then every white object reachable by a black goroutine (a
goroutine whose stack has been scanned) is grey-protected by a heap
object.
**Proof.** Let *W* be a white object reachable by a black goroutine.
If *W* is reachable from the goroutine's stack, then by lemma 2 *W* is
grey-protected by a heap object.
Otherwise, *W* must be reachable from a global *X*.
Let *O* be the last non-white object in the path from *X* to *W* (*O*
must exist because *X* itself is either grey or black).
If *O* is grey, then *O* is a heap object that grey-protects *W*.
Otherwise, *O* is black and by lemma 2, *W* is grey-protected by some
heap object.
∎
Now we're ready to prove that the hybrid write barrier satisfies the
weak tricolor invariant, which implies it is *sound* (it marks all
reachable objects).
**Theorem 1.** The hybrid write barrier satisfies the weak tricolor
invariant.
**Proof.** We first show that the hybrid write barrier satisfies the
modified tricolor invariant.
The proof follows by induction over the operations that affect the
object graph or its coloring.
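The barrier whose cases are analyzed in the induction below is the hybrid write barrier proposed earlier in this document, which combines a Yuasa-style deletion barrier (shade the overwritten pointer) with a Dijkstra-style insertion barrier applied while the current stack is grey (shade the installed pointer). In pseudocode:

```
writePointer(slot, ptr):
    shade(*slot)
    if current stack is grey:
        shade(ptr)
    *slot = ptr
```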
*Base case.* Initially there are no black objects, so the invariant
holds trivially.
*Write pointer in the heap.* Let *obj.slot := ptr* denote the write,
where *obj* is in the heap, and let *optr* denote the value of
*obj.slot* prior to the write.
Let *W* be a white object pointed to by a black object *B* after the
heap write.
There are two cases:
1. *B ≠ obj*: *W* was pointed to by the same black object *B* before
the write, and, by assumption, *W* was grey-protected by a path *G
-> W₁ -> ⋯ -> Wₙ -> W*, where *G* is a heap object.
If none of these edges are *obj.slot*, then *W* is still protected
by *G*.
Otherwise, the path must have included the edge *obj -> optr* and,
since the write barrier shades *optr*, *W* is grey-protected by
*optr* after the heap write.
2. *B = obj*: We first establish that *W* was grey-protected before
the write, which breaks down into two cases:
   1. *W = ptr*: The goroutine must be black, because otherwise the
      write barrier would have shaded *ptr* and *W* would not be white.
*ptr* must have been reachable by the goroutine for it to write
it, so by lemma 3, *ptr* was grey-protected by some heap object
*G* prior to the write.
2. *W ≠ ptr*: *B* pointed to *W* before the write and, by
assumption, *W* was grey-protected by some heap object *G*
before the write.
Because *obj* was black before the write, it could not be in the
grey-protecting path from *G* to *W*, so this write did not affect
this path, so *W* is still grey-protected by *G*.
*Write pointer in a stack.* Let *stk.slot := ptr* denote the write.
Let *W* be a white object pointed to by a black object *B* after the
stack write.
We first establish that *W* was grey-protected before the stack write,
which breaks down into two cases:
1. *B = stk* and *W = ptr*: *W* may not have been pointed to by a
black object prior to the stack write (that is, the write may
create a new *B -> W* edge).
However, *ptr* must have been reachable by the goroutine, which is
black (because *B = stk*), so by lemma 3, *W* was grey-protected by
some heap object *G* prior to the write.
2. Otherwise: *W* was pointed to by the same black object *B* prior to
the stack write, so, by assumption, *W* was grey-protected by some
heap object *G* prior to the write.
By lemma 1, none of the objects in the grey-protecting path from heap
object *G* to *W* can be a stack, so the stack write does not modify
this path.
Hence, *W* is still grey-protected after the stack write by *G*.
*Scan heap object.* Let *obj* denote the scanned object.
Let *W* be a white object pointed to by a black object *B* after the scan.
*B* cannot be *obj* because immediately after the scan, *obj* does not
point to any white objects.
Thus, *B* must have been black and pointed to *W* before the scan as
well, so, by assumption, *W* was grey-protected by a path *G -> W₁ ->
⋯ -> Wₙ -> W*, where *G* is a heap object.
If some *Wᵢ* was an object pointed to by *obj*, then *W* is
grey-protected by *Wᵢ* after the scan.
Otherwise, *W* is still grey-protected by *G*.
*Stack scan.* This case is symmetric with scanning a heap object.
<!-- Old direct proof of stack scans:
*Stack scan.* Let *W* be an object pointed to by a black object *B*
after the stack scan.
Even though scanning stack *stk* blackens *stk*, *B* cannot be *stk*
because scanning greys all objects directly referenced by *stk*.
Hence, *W* was pointed to by the same black object *B* before the
stack scan, and by assumption was grey-protected by some path *G -> W₁
-> ⋯ -> Wₙ -> W* where *G* is a heap object.
By lemma 1, none of the objects in the grey-protecting path can be a
stack, so after the stack scan, *W* is either still grey-protected by
*G*, or some *Wᵢ* was greyed by the stack scan and *W* is now
grey-protected by *Wᵢ*.
-->
*Allocate an object.* Since new objects are allocated black and point
to nothing, the invariant trivially holds across allocation.
*Create a stack.* This case is symmetric with object allocation
because new stacks start out empty and hence are trivially black.
This completes the induction cases and shows that the hybrid write
barrier satisfies the modified tricolor invariant.
Since the modified tricolor invariant trivially implies the weak
tricolor invariant, the hybrid write barrier satisfies the weak
tricolor invariant.
∎
The garbage collector is also *bounded*, meaning it eventually
terminates.
**Theorem 2.** A garbage collector using the hybrid write barrier
terminates in a finite number of marking steps.
**Proof.** We observe that objects progress strictly from white to
grey to black and, because new objects (including stacks) are
allocated black, the total marking work is bounded by the number of
objects at the beginning of garbage collection, which is finite.
∎
Finally, the garbage collector is also *complete*, in the sense that
it eventually collects all unreachable objects.
This is trivial from the fact that the garbage collector cannot mark
any objects that are not reachable when the mark phase starts.
## References
[Auerbach '07] J. Auerbach, D. F. Bacon, B. Blainey, P. Cheng, M.
Dawson, M. Fulton, D. Grove, D. Hart, and M. Stoodley. Design and
implementation of a comprehensive real-time java virtual machine. In
*Proceedings of the 7th ACM & IEEE international conference on
Embedded software (EMSOFT '07)*, 249–258, 2007.
[Dijkstra '78] E. W. Dijkstra, L. Lamport, A. J. Martin, C. S.
Scholten, and E. F. Steffens. On-the-fly garbage collection: An
exercise in cooperation. *Communications of the ACM*, 21(11), 966–975,
1978.
[Ellen '07] F. Ellen, Y. Lev, V. Luchango, and M. Moir. SNZI: Scalable
nonzero indicators. In *Proceedings of the 26th ACM SIGACT-SIGOPS
Symposium on Principles of Distributed Computing*, Portland, OR,
August 2007.
[Hudson '97] R. L. Hudson, R. Morrison, J. E. B. Moss, and D. S.
Munro. Garbage collecting the world: One car at a time. In *ACM
SIGPLAN Notices* 32(10):162–175, October 1997.
[Pirinen '98] P. P. Pirinen. Barrier techniques for incremental
tracing. In *ACM SIGPLAN Notices*, 34(3), 20–25, October 1998.
[Steele '75] G. L. Steele Jr. Multiprocessing compactifying garbage
collection. *Communications of the ACM*, 18(9), 495–508, 1975.
[Yuasa '90] T. Yuasa. Real-time garbage collection on general-purpose
machines. *Journal of Systems and Software*, 11(3):181–198, 1990.
# Proposal: Zip-based Go package archives
Author: Russ Cox
Last updated: February 2016
Discussion at https://golang.org/issue/14386.
## Abstract
Go package archives (the `*.a` files manipulated by `go tool pack`) use the old Unix ar archive format.
I propose to change both Go package archives and Go object files to use the more standard zip archive format.
In contrast to ar archives, zip archives admit efficient random access to individual files within the archive
and also allow decisions about compression on a per-file basis.
The result for Go will be cleaner access to the parts of a package archive
and the ability later to add compression of individual parts
as appropriate.
## Background
### Go object files and package archives
The Go toolchain stores compiled packages in archives
written in the Unix ar format used by traditional C toolchains.
Before continuing, two notes on terminology:
- An archive (a `*.a` file), such as an ar or zip file, is a file that contains other files.
To avoid confusion, this design document uses the term _archive_
for the archive file itself and reserves the term _file_ exclusively
for other kinds of files, including the files inside the archive.
- An _object file_ (a `*.o` file) holds machine code corresponding to a source file;
the linker merges multiple object files into a final executable.
Examples of object files include the ELF, Mach-O, and PE
object files used by Linux, OS X, and Windows systems, respectively.
We refer to these as _system object files_.
Go uses its own object file format, which we refer to as _Go object files_;
that format is unchanged by this proposal.
In a traditional C toolchain, an archive contains a file
named `__.SYMDEF` and then one or more object files (`.o` files)
containing compiled code; each object file corresponds to a different C or assembly source file.
The `__.SYMDEF` file is a symbol index: a mapping from symbol name (such as `printf`)
to the specific object file containing that symbol (such as `print.o`).
A traditional C linker reads the symbol index to learn which of the
object files it needs to read from the archive; it can completely ignore
the others.
Go has diverged over time from the C toolchain way of using ar archives.
A Go package archive contains
package metadata in a file named `__.PKGDEF`,
one or more Go object files,
and zero or more system object files.
The Go object files are generated by the compiler
(one for all the Go source code in the package)
and by the assembler
(one for each assembly source file).
The system object files are generated by the system C compiler
(one for each `*.c` file in the package directory, plus a few for
C source files generated by cgo),
or (less commonly) are direct copies of `*.syso` files in the package source directory.
Because the Go linker does dead code elimination at a symbol level rather than
at the object file level, a traditional C symbol index is not useful
and not included in the Go package archive.
Long before Go 1, the Go compiler read a single Go source file
and wrote a single Go object file, much like a C compiler.
Each object file contained a fragment of package metadata
contributed by that file.
After running the compiler separately on each Go source file
in a package, the build system (`make`) invoked the archiver
(`6ar`, even on non-amd64 systems) to create an archive
containing all the object files.
As part of creating the archive, the archiver copied and merged
the metadata fragments from the many Go object files into
the single `__.PKGDEF` file.
This had the effect of storing the package metadata in the archive twice,
although the different copies ended up being read by different tools.
The copy in `__.PKGDEF` was read by later compilations importing
the package, and the fragmented copy spread across the Go object files
was read by the linker (which needed to read the object files anyway)
and used to detect version skew (a common problem due to the
use of per-directory makefiles).
By the time of Go 1, the Go compiler read all the Go source files for a package
together and wrote a single Go object file.
As before, that object file contained (now complete) package metadata,
and the archiver (now `go tool pack`) extracted that metadata into the
`__.PKGDEF` file.
The package still contained two copies of the package metadata.
Equally embarrassing, most package archives
(those for Go packages with no assembly or C)
contained only a single `*.o` file, making the archiving step
a mostly unnecessary, trivial copy of the data through the file system.
Go 1.3 added a new `-pack` option to the Go compiler, directing it to
write a Go package archive containing `__.PKGDEF` and a `_go_.o` _without_
package metadata.
The go command used this option to create the initial package archive.
If the package had no assembly or C sources, there was no need for
any more work on the archive.
If the package did have assembly or C sources, those additional objects
needed to be appended to the archive, which could be done without
copying or rewriting the existing data.
Adopting `-pack` eliminated the duplicate copy of the package metadata,
and it also removed from the linker the job of detecting version skew,
since the package metadata was no longer in the object files the linker read.
The package metadata itself contains multiple sections used by different programs:
a unique build ID, needed by the go command;
the package name, needed by the compiler during import, but also needed by the linker;
detailed information about exported API, needed by the compiler during import;
and directives related to cgo, needed by the linker.
The entire metadata format is textual, with sections separated by `$$` lines.
Today, the situation is not much different from that of Go 1.3.
There are two main problems.
First, the individual package metadata sections are difficult to
access independently, because of the use of ad-hoc framing
inside the standard ar-format framing.
The inner framing is necessary in the current system in part
because metadata is still sometimes (when not using `-pack`)
stored in Go object files,
and those object files have no outer framing.
The line-oriented nature of the inner framing is a hurdle
for converting to a more compact binary format for the export data.
In a cleaner design, the different metadata sections would be stored
in different files in the Go package archive, eliminating the inner framing.
Cleaner separation would allow different tools to access only the
data they needed without needing to process unrelated data.
The go command, the compiler, and the linker all read `__.PKGDEF`,
but only the compiler needs all of it.
Distributed build systems can also benefit from splitting a package
archive into two halves, one used by the compiler to satisfy imports
and one used by the linker to generate the final executable.
The build system can then ship just the compiler-relevant
data to machines running the compiler and just the linker-relevant
data to machines running the linker.
In particular, the compiler does not need the Go object files,
and the linker does not need the Go export data;
both savings can be large.
Second, there is no simple way to enable compression for certain
files in the Go package archive.
It could be worthwhile to compress the Go export data
and Go object files, to save disk space as well as I/O time
(not just disk I/O but potentially also network I/O,
when using network file systems or distributed build systems).
### Archive file formats
The ar archive format is simplistic: it begins with a distinguishing 8-byte header (`!<arch>\n`)
and then contains a sequence of files.
Each file has its own 60-byte header giving the file name (up to 16 bytes),
modification time, user and group IDs, permission bits, and size.
That header is followed by size bytes of data.
If size is odd, the data is followed by a single padding byte
so that file headers are always 16-bit aligned within the archive.
There is no table of contents: to find the names of all files in the archive,
one must read the entire archive (perhaps seeking past file content).
There is no compression.
Additional file entries can simply be appended to the end of an existing archive.
The zip archive format is much more capable, but only a little more complex.
A zip archive consists of a sequence of files followed by a table of contents.
Each file is stored as a header giving metadata such as the file name and data encoding,
followed by encoded file data,
followed by a file trailer.
The two standard encodings are “store” (raw, uncompressed)
and “deflate” (compressed using the same algorithm as zlib and gzip).
The table of contents at the end of the zip archive is a contiguous list of file headers
including offsets to the actual file data, making it efficient to access a particular
file in the archive.
As mentioned above, the zip format supports but does not require compression.
Appending to a zip archive is simple, although not as trivial as appending to an ar archive.
The table of contents must be saved, then new file entries are written
starting where the table of contents used to be, and then a new, expanded
table of contents is written.
Importantly, the existing files are left in place during this process,
making it about as efficient as adding to an ar format archive.
## Proposal
To address the problems described above,
I propose to change Go package archives
to use the zip format instead of the current ar format,
at the same time separating the current `__.PKGDEF` metadata file
into multiple files according to which tools process the data.
To avoid the need to preserve the current custom framing in
Go object files, I propose to stop writing Go object files at all,
except inside Go package archives.
The toolchain would still generate `*.o` files at the times it does today,
but the bytes inside those files would be identical to those inside
a Go package archive.
Although the bytes stored in the `*.a` and `*.o` files would be changing,
there would be no visible changes in the rest of the toolchain.
In particular, the file names would stay the same,
as would the commands used to manipulate and inspect archives.
The only differences would be in the encoding used within the file.
A Go package archive would be a zip-format archive containing the following files:
_go_/header
_go_/export
_go_/cgo
_go_/*.obj
_go_/*.sysobj
The `_go_/header` file is required, must be first in the archive, must be uncompressed,
and is of bounded size.
The header content is a sequence of at most four textual metadata lines. For example:
go object darwin amd64 devel +8b5a9bd Tue Feb 2 22:46:19 2016 -0500 X:none
build id "4fe8e8c8bc1ea2d7c03bd08cf3025e302ff33742"
main
safe
The `go object` line must be first and identifies the operating system, architecture,
and toolchain version (including enabled experiments) of the package archive.
This line is today the first line of `__.PKGDEF`, and its uses remain the same:
the compiler and linker both refuse to use package archives with an unexpected
`go object` line.
The remaining lines are optional, but whichever ones are present must appear
in the order given here.
The `build id` line specifies the build ID, an opaque hash used by the build system
(typically the go command) as a version identifier, to help detect when a package
must be rebuilt.
This line is today the second line of `__.PKGDEF`, when present.
The `main` line is present if the package archive is `package main`, making it a valid
top-level input for the linker. The command `go tool link x.a` will refuse to build
a binary from `x.a` if that package's header does not have a `main` line.
The `safe` line is present if the code was compiled with `-u`, indicating that it has
been checked for safety. When the linker is invoked with `-u`, it refuses to use
any unsafe package archives during the link.
This mode is experimental and carried forward from earlier versions of Go.
The `main` and `safe` lines are today derived from the first line of the export data,
which echoes the package statement from the Go source code, followed by the
word `safe` for safe packages.
The new header omits the name of non-main packages entirely in order
to ensure that the header size is bounded no matter how long a package name
appears in the package's source code.
More header lines may be added to the end of this list in the future,
always being careful to keep the overall header size bounded.
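A reader of this proposed format could parse the header with a few string operations. The sketch below is a hypothetical parser for the four header lines described above (ordering checks between the optional lines are omitted for brevity):

```go
package main

import (
	"fmt"
	"strings"
)

// Header holds the fields of a _go_/header file as described above.
// This parser is an illustrative sketch, not a proposed implementation.
type Header struct {
	GoObject string // the mandatory "go object ..." line
	BuildID  string // optional, from `build id "..."`
	Main     bool   // package main, valid top-level linker input
	Safe     bool   // compiled with -u (safe mode)
}

func parseHeader(data string) (Header, error) {
	var h Header
	lines := strings.Split(strings.TrimSuffix(data, "\n"), "\n")
	if len(lines) == 0 || !strings.HasPrefix(lines[0], "go object ") {
		return h, fmt.Errorf("missing go object line")
	}
	h.GoObject = lines[0]
	for _, line := range lines[1:] {
		switch {
		case strings.HasPrefix(line, `build id "`):
			h.BuildID = strings.TrimSuffix(strings.TrimPrefix(line, `build id "`), `"`)
		case line == "main":
			h.Main = true
		case line == "safe":
			h.Safe = true
		default:
			return h, fmt.Errorf("unknown header line %q", line)
		}
	}
	return h, nil
}

func main() {
	h, err := parseHeader("go object darwin amd64 devel\nbuild id \"4fe8\"\nmain\n")
	fmt.Println(h.BuildID, h.Main, h.Safe, err) // 4fe8 true false <nil>
}
```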
The `_go_/export` file is usually required (details below), must be second in the archive,
and holds a description of the package's exported API
for use by a later compilation importing the package.
The format of the export data is not specified here,
but as mentioned above part of the motivation for this design
is to make it possible to use a binary export data format
and to apply compression to it.
The export data corresponds to the top of the `__.PKGDEF` file,
excluding the initial `go object` and `build id` lines and stopping at the first `$$` line.
The `_go_/cgo` file is optional and holds cgo-related directives for the linker.
The format of these directives is not specified here.
This data corresponds to the end of the Go object file metadata,
specifically the lines between the third `$$` line and the terminating `!` line.
Each of the `_go_/*.obj` files is a traditional Go object file,
holding machine code, data, and relocations processed by the linker.
Each of the `_go_/*.sysobj` files is a system object file,
either generated during the build by the system C compiler
or copied verbatim from a `*.syso` file in the package source directory
(see the [go command documentation](https://golang.org/cmd/go/#hdr-File_types)
for more about `*.syso` files).
It is valid today and remains valid in this proposal for
multiple files within an archive to have the same name.
This simplifies the generation and combination of package files.
## Rationale
### Zip format
As discussed in the background section, the most fundamental problem
with the current archive format as used by Go is that all package metadata is
combined into the single `__.PKGDEF` file.
This is done for many reasons, all addressed by the use of zip files.
One reason for the single `__.PKGDEF` file is that there is
no efficient random access to files inside ar archives.
The first file in the archive is the only one that can be accessed
with a fixed number of disk I/O operations, and so it is often
given a distinguished role.
The zip format has a contiguous table of contents,
making it possible to access any file in the archive in a
fixed number of disk I/O operations.
This reduces the pressure to keep all important data in the first file.
It is still possible, however, to read a zip file from the beginning of the file,
without first consulting the table of contents.
The requirements that `_go_/header` be first, be uncompressed,
and be bounded in size exist precisely to make it possible to read
the package archive header by reading nothing but a prefix of the file
(say, the first kilobyte).
The requirement that `_go_/export` be second also makes it possible
for a compiler to read the header and export data without using
any disk I/O to read the table of contents.
As mentioned above, another reason for the single `__.PKGDEF` file
is that the metadata is stored not just in Go package archives but also
in Go object files, as written by `go tool compile` (without `-pack`) or `go tool asm`,
and those object files have no archive framing available.
Changing `*.o` files to reuse the Go package archive format
eliminates the need for a separate framing solution for metadata in `*.o` files.
Zip also makes it possible to make different compression decisions for
different files within an archive. This is important primarily because
we would like the option of compressing the export data and Go object files
but likely cannot compress the system object files, because reading them
requires having random access to the data.
It is also useful to be able to arrange that the header can be read
without the overhead of decompression.
We could take the current archive format and add a table of contents
and support for per-file compression methods, but why reinvent the wheel?
Zip is a standard format, and the Go standard library already supports it well.
The only other standard archive format in the Go standard library is the Unix tar format.
Tar is a marvel: it adds significant complexity to the ar format
without addressing any of the architectural problems that make ar unsuitable for our purposes.
In some circles, zip has a bad reputation.
I speculate that this is due to zip's historically strong association with MS-DOS
and historically weak support on Unix systems.
Reputation aside, the zip format is clearly documented,
is well designed, and avoids the architectural problems of ar and tar.
It is perhaps worth noting that Java .jar files also use the zip format internally,
and that seems to be working well for them.
### File names
The names of files within the package archives all begin with `_go_/`.
This is done so that Go packages are easier to distinguish from other zip files
and also so that an accidental `unzip x.a` is easier to clean up.
Distinguishing Go object files from system object files by name is new in this proposal.
Today, tools assume that `__.PKGDEF` is the only non-object file in the
package archive, and each file must be inspected to find out what kind
of object file it is (Go object or system object).
The suffixes make it possible to know both that a particular file
is an object file and what kind it is, without reading the file data.
The suffixes also isolate tools from each other, making it easier
to extend the archive with new data in new files.
For example, if some other part of the toolchain needs to add a
new file to the archive, the linker will automatically ignore it
(assuming the file name does not end in `.obj` or `.sysobj`).
### Compression
Go export data and Go object files can both be quite large.
I ran an experiment on a large program at Google, built with Go 1.5.
I gathered all the package archives linked into that program
corresponding to Go source code generated from protocol buffer definitions
(which can be quite large), and I ran the standard `gzip -1`
(fastest, least compression) on those files.
That resulted in a 7.5x space savings for the packages.
Clearly there are significant space improvements available
with only modest attempts at compression.
I ran another experiment on the main repo toward the end of the Go 1.6 cycle.
I changed the existing package archive format to force
compression of `__.PKGDEF` and all Go object files,
using Go's compress/gzip at compression level 1,
when writing them to the package archive,
and I changed all readers to know to decompress them when
reading them back out of the archive.
This resulted in a 4X space savings for packages on disk:
the $GOROOT/pkg tree after make.bash shrank from about 64 MB to about 16 MB.
The cost was an approximately 10% slowdown in make.bash time:
the roughly two minutes make.bash normally took on my laptop
was extended by about 10 seconds.
My experiment was not as efficient in its use of compression as it could be.
For example, the linker went to the trouble to open and decompress
the beginning of `__.PKGDEF` just to read the few bits it actually needed.
Independently, Klaus Post has been working on improving the
speed of Go's compress/flate package (used by archive/zip,
compress/gzip, and compress/zlib) at all compression levels,
as well as the efficiency of the decompressor.
He has also replaced compression level 1 by a port of
the logic from Google's Snappy (formerly Zippy) algorithm,
which was designed specifically for compression speed.
Unlike Snappy, though, his port produces DEFLATE-compatible
output, so it can be used by a compressor without requiring
a non-standard decompressor on the other side.
From the combination of a more careful separation of data within
the package archive and Klaus's work on compression speed,
I expect the slowdown in make.bash due to
compression can be reduced to under 5% (for a 4X space savings!).
Of course, if the cost of compression is determined to be not paid for
by the space savings it brings, it is possible to use zip with no
compression at all.
The other benefits of the zip format still make this a worthwhile cleanup.
## Compatibility
The toolchain is not subject to the [compatibility guidelines](https://golang.org/doc/go1compat).
Even so, this change is intended to be invisible to any use case that does not actually open a
package archive or object file and read the raw bytes contained within.
## Implementation
The implementation proceeds in these steps:
1. Implementation of a new package `cmd/internal/pkg`
for manipulating zip-format package archives.
2. Replacement of the ar-format archives with zip-format archives,
but still containing the old files (`__.PKGDEF` followed by any number
of object files of unspecified type).
3. Implementation of the new file structure within the archives:
the separate metadata files and the forced suffixes for Go object files
and system object files.
4. Addition of compression.
Steps 1, 2, and 3 should have no performance impact on build times.
We will measure the speed of make.bash to confirm this.
These steps depend on some extensions to the archive/zip package suggested by Roger Peppe.
He has implemented these and intends to send them early in the Go 1.7 cycle.
Step 4 will have a performance impact on build times.
It must be measured to make a proper engineering decision
about whether and how much to compress.
This step depends on the compress/flate performance improvements by Klaus Post described above.
He has implemented these and intends to send them early in the Go 1.7 cycle.
I will do this work early in the Go 1.7 cycle, immediately following Roger's and Klaus's work.
I have a rough but working prototype of steps 1, 2, and 3 already.
Enabling compression in the zip writer is a few lines of code beyond that.
Part of the motivation for doing this early in Go 1.7 is to make it possible
for Robert Griesemer to gather performance data for his new binary
export data format and enable that for Go 1.7 as well.
The binary export code is currently bottlenecked by the need to escape
and unescape the data to avoid generating a terminating `\n$$` sequence.
# Bug-resistant build constraints — Draft Design
Russ Cox\
June 30, 2020
This is a **Draft Design**, not a formal Go proposal,
because it describes
a potential [large change](https://research.swtch.com/proposals-large).
The goal of circulating this draft design is to collect feedback
to shape an intended eventual proposal.
We are using this change to experiment with new ways to
[scale discussions](https://research.swtch.com/proposals-discuss)
about large changes.
For this change, we will use
[a Go Reddit thread](https://golang.org/s/go-build-reddit)
to manage Q&A, since Reddit's threading support
can easily match questions with answers
and keep separate lines of discussion separate.
There is also a [video presentation](https://golang.org/s/go-build-video) of this draft design.
The [prototype code](https://golang.org/s/go-build-code) is also available for trying out.
## Abstract
We present a possible plan to
transition from the current `// +build` lines for build tag selection
to new `//go:build` lines that use standard boolean expression syntax.
The plan includes relaxing the possible placement of `//go:build` lines
compared to `// +build` lines, as well as rejecting misplaced `//go:build` lines.
These changes should make build constraints easier to use
and reduce time spent debugging mistakes.
The plan also includes a graceful transition from `// +build` to `//go:build`
syntax that avoids breaking the Go ecosystem.
This design draft is based on a preliminary discussion on
[golang.org/issue/25348](https://golang.org/issue/25348),
but discussion of this design draft should happen on
[the Go Reddit thread](https://golang.org/s/go-build-reddit).
## Background
It can be necessary to write different Go code for different compilation contexts.
Go’s solution to this problem is conditional compilation at the file level:
each file is either in the build or not.
(Compared with the C preprocessor’s conditional selection of individual lines,
selection of individual files is easier to understand and requires no special support
in tools that parse single source files.)
Go refers to the current operating system as `$GOOS`
and the current architecture as `$GOARCH`.
This section uses the generic names GOOS and GOARCH
to stand in for any of the specific names (windows, linux, 386, amd64, and so on).
When the `go/build` package was written in August 2011,
it added explicit support for the convention that files
named `*_GOOS.*`, `*_GOARCH.*`, or `*_GOOS_GOARCH.*`
only compiled on those specific systems.
Until then, the convention was only a manual one,
maintained by hand in the package Makefiles.
For more complex situations, such as files that applied
to multiple operating systems (but not all),
build constraints were introduced in September 2011.
### Syntax
Originally, the arguments to a build constraint
were a list of alternatives, each of which took
one of three possible forms:
an operating system name (GOOS),
an architecture (GOARCH),
or both separated by a slash (GOOS/GOARCH).
[CL 5018044](https://golang.org/cl/5018044)
used a `//build` prefix,
but a followup discussion on
[CL 5011046](https://codereview.appspot.com/5011046)
while updating the tree to use the comments
led to the syntax changing to `// +build`
in [CL 5015051](https://golang.org/cl/5015051).
For example, this line indicated that the file should build
on Linux for any architecture, or on Windows only for 386:

    // +build linux windows/386

That is, each line listed a set of OR’ed conditions.
Because each line applied independently, multiple lines
were in effect AND’ed together.
For example,

    // +build linux windows
    // +build amd64

and

    // +build linux/amd64 windows/amd64

were equivalent.
In December 2011, [CL 5489100](https://golang.org/cl/5489100)
added the `cgo` and `nocgo` build tags.
It also generalized slash syntax to mean AND of arbitrary tags,
not just GOOS and GOARCH, as in `// +build nocgo/linux`.
In January 2012, [CL 5554079](https://golang.org/cl/5554079)
added support for custom build tags (such as `appengine`),
changed slash to comma, and introduced `!` for negation.
In March 2013, [CL 7794043](https://golang.org/cl/7794043) added the `go1.1` build tag,
enabling release-specific file selection.
The syntax changed to allow dots in tag names.
Although each of these steps makes sense in isolation,
we have arrived at a non-standard boolean expression syntax
capable of expressing ANDs of ORs of ANDs of potential NOTs of tags.
It is difficult for developers to remember the syntax.
For example, two of these three mean the same thing, but which two?

    // +build linux,386

    // +build linux 386

    // +build linux
    // +build 386

The simple form worked well in the original context,
but it has not evolved gracefully.
The current richness of expression would be better served
by a more familiar syntax.
Surveying the public Go ecosystem for `// +build` lines in March 2020
turned up a few illustrative apparent bugs that had so far eluded
detection. These bugs might have been avoided entirely
if developers had been working with more familiar syntax.
- [github.com/streamsets/datacollector-edge](https://github.com/streamsets/datacollector-edge/issues/8)

      // +build 386 windows,amd64 windows

Confused AND and OR: simplifies to “`386` OR `windows`”.\
Apparently intended `// +build 386,windows amd64,windows`.
- [github.com/zchee/xxhash3](https://github.com/zchee/xxhash3/issues/1)

      // +build 386 !gccgo,amd64 !gccgo,amd64p32 !gccgo

Confused AND and OR: simplifies to “`386` OR NOT `gccgo`”.\
Apparently intended:

      // +build 386 amd64 amd64p32
      // +build !gccgo

- [github.com/gopherjs/vecty](https://github.com/gopherjs/vecty/issues/261)

      // +build go1.12,wasm,js js

Intended meaning unclear; simplifies to just “`js`”.
- [gitlab.com/aquachain/aquachain](https://gitlab.com/aquachain/aquachain/-/issues/2)

      // +build windows,solaris,nacl nacl solaris windows

Apparently intended (and at least equivalent to) `// +build nacl solaris windows`.
- [github.com/katzenpost/core](https://github.com/katzenpost/core/issues/97)

      // +build linux,!amd64
      // +build linux,amd64,noasm
      // +build !go1.9

Unsatisfiable (`!amd64` and `amd64` can’t both be true).\
Apparently intended `// +build linux,!amd64 linux,amd64,noasm !go1.9`.
Later, in June 2020, Alan Donovan wrote some 64-bit specific code that he annotated with:

    //+build linux darwin
    //+build amd64 arm64 mips64x ppc64x

For the file implementing the generic fallback, he needed the negation of that condition and wrote:

    //+build !linux,!darwin
    //+build !amd64,!arm64,!mips64x,!ppc64x

This is subtly wrong. He correctly negated each line, but repeated lines apply constraints independently,
meaning they are ANDed together. To negate the overall meaning, the two lines in the fallback need
to be ORed together, meaning they need to be a single line:

    //+build !linux,!darwin !amd64,!arm64,!mips64x,!ppc64x

Alan has written a very good book about Go—he is certainly an experienced developer—and [still got this wrong](https://github.com/google/starlark-go/pull/280).
It's all clearly too subtle.
Getting ahead of ourselves just a little, if we used a standard boolean syntax,
then the first file would have used:

    (linux || darwin) && (amd64 || arm64 || mips64x || ppc64x)

and the negation in the fallback would have been easy:

    !((linux || darwin) && (amd64 || arm64 || mips64x || ppc64x))

### Placement
In addition to confusion about syntax, there is also confusion
about placement. The documentation (in `go doc go/build`) explains:
> “Constraints may appear in any kind of source file (not just Go),
> but they must appear near the top of the file, preceded only by blank lines
> and other line comments. These rules mean that in Go files a build
> constraint must appear before the package clause.
>
> To distinguish build constraints from package documentation, a series of
> build constraints must be followed by a blank line.”
Because the search for build constraints stops
at the first non-`//`, non-blank line (usually the Go `package` statement),
this is an ignored build constraint:

    package main
    // +build linux

The syntax even excludes
C-style `/* */` comments, so this is an ignored build constraint:

    /*
    Copyright ...
    */
    // +build linux
    package main

Furthermore, to avoid confusion with doc comments,
the search stops at the last blank line before the
non-`//`, non-blank line, so this is an ignored build constraint
(and a doc comment):

    // +build linux
    package main

This is also an ignored build constraint, in an assembly file:

    // +build 386 amd64
    #include "textflag.h"

Surveying the public Go ecosystem for `// +build` lines in March 2020
turned up
- 98 ignored build constraints after `/* */` comments,
usually copyright notices;
- 50 ignored build constraints in doc comments;
- and 11 ignored build constraints after the `package` declaration.
These are small numbers compared to the 110,000 unique files found
that contained build constraints,
but these are only the ones that slipped through, unnoticed,
into the latest public commits.
We should expect that there are many more such mistakes
corrected in earlier commits
or that lead to head-scratching debugging sessions
but avoid being committed.
## Design
The core idea of the design is
to replace the current `// +build` lines for build tag selection
with new `//go:build` lines that use more familiar boolean expressions.
For example, the old syntax

    // +build linux
    // +build 386

would be replaced by the new syntax

    //go:build linux && 386

The design also admits `//go:build` lines in more locations
and rejects misplaced `//go:build` lines.
The key to the design is a smooth transition that avoids
breaking Go code.
The next three sections explain these three parts of the design in detail.
### Syntax
The new syntax is given by this grammar, using the [notation of the Go spec](https://golang.org/ref/spec#Notation):

    BuildLine  = "//go:build" Expr
    Expr       = OrExpr
    OrExpr     = AndExpr { "||" AndExpr }
    AndExpr    = UnaryExpr { "&&" UnaryExpr }
    UnaryExpr  = "!" UnaryExpr | "(" Expr ")" | tag
    tag        = tag_letter { tag_letter }
    tag_letter = unicode_letter | unicode_digit | "_" | "."

That is, the syntax of build tags is unchanged from its current form,
but the combination of build tags is now done with
Go’s `||`, `&&`, and `!` operators and parentheses.
(Note that build tags are not always valid Go expressions,
even though they share the operators,
because the tags are not always valid identifiers.
For example: “`go1.1`”.)
It is an error for a file to have more than one `//go:build` line,
to eliminate confusion about whether multiple lines are
implicitly ANDed or ORed together.
### Placement
The current search for build constraints can be explained concisely,
but it has unexpected behaviors that are difficult to understand,
as discussed in the Background section.
It remains useful for both people and programs like the `go` command
not to need to read the entire file to find any build constraints,
so this design still ends the search for build constraints
at the first non-comment text in the file.
However, this design allows placing `//go:build` constraints
after `/* */` comments or in doc comments.
([Proposal issue 37974](https://golang.org/issue/37974),
which will ship in Go 1.15,
strips those `//go:` lines out of the doc comments.)
The new rule would be:
> “Constraints may appear in any kind of source file (not just Go),
> but they must appear near the top of the file, preceded only by blank lines
> and other `//` and `/* */` comments. These rules mean that in Go files a build
> constraint must appear before the package clause.”
In addition to this more relaxed rule,
the design would change `gofmt` to move misplaced
build constraints
to valid locations,
and it would change the Go compiler and assembler
to reject misplaced build constraints.
This will correct most misplaced constraints automatically
and report the others; no misplaced constraint should go unnoticed.
(The next section describes the tool changes in more detail.)
### Transition
A smooth transition is critical for a successful rollout.
By the [Go release policy](https://golang.org/doc/devel/release.html#policy),
the release of Go 1.N
ends support for Go 1.(N-2), but most users will still want to be
able to write code that works with both Go 1.(N−1) and Go 1.N.
If Go 1.N introduces support for `//go:build` lines,
code that needs to build with the past two releases can’t fully adopt
the new syntax until Go 1.(N+1) is released.
Publishers of popular dependencies may not realize this,
which may lead to breakage in the Go ecosystem.
We must make accidental breakage unlikely.
To help with the transition, we envision a plan
carried out over three Go releases.
For concreteness, we call them Go 1.(N−1), Go 1.N, and Go 1.(N+1).
The bulk of the work happens in Go 1.N,
with minor preparation in Go 1.(N−1) and minor cleanup in Go 1.(N+1).
**Go 1.(N−1)** would prepare for the transition with minimal changes.
In Go 1.(N−1):
- Builds will fail when a Go source file
contains `//go:build` lines without `// +build` lines.
- Builds will _not_ otherwise look at `//go:build` lines
for file selection.
- Users will not be encouraged to use `//go:build` lines yet.
At this point:
- Packages will build in Go 1.(N−1) using the same files as in Go 1.(N-2), always.
- Go 1.(N−1) release notes will not mention `//go:build` at all.
**Go 1.N** would start the transition. In Go 1.N:
- Builds will start preferring `//go:build` lines for file selection.
If there is no `//go:build` in a file, then any `// +build` lines
still apply.
- Builds will no longer fail if a Go file contains `//go:build`
without `// +build`.
- Builds will fail if a Go or assembly file contains `//go:build` too late in the file.
- `Gofmt` will move misplaced `//go:build` and `// +build`
lines to their proper location in the file.
- `Gofmt` will format the expressions in `//go:build` lines
using the same rules as for other Go boolean expressions
(spaces around all `&&` and `||` operators).
- If a file contains only `// +build` lines,
`gofmt` will add an equivalent `//go:build` line above them.
- If a file contains both `//go:build` and `// +build` lines,
`gofmt` will consider the `//go:build` the source of truth
and update the `// +build` lines to match,
preserving compatibility with earlier versions of Go.
`Gofmt` will also reject `//go:build` lines that are deemed
too complex to convert into `// +build` format,
although this situation will be rare.
(Note the “If” at the start of this bullet.
`Gofmt` will _not_ add `// +build` lines to a file
that only has `//go:build`.)
- The `buildtags` check in `go vet` will add support for `//go:build` constraints.
It will fail when a Go source file contains
`//go:build` and `// +build` lines with different meanings.
If the check fails, one can run `gofmt` `-w`.
- The `buildtags` check will also fail when a Go source file contains
`//go:build` without `// +build` and its containing module
has a `go` line listing a version before Go 1.N.
If the check fails, one can add any `// +build` line and then run `gofmt` `-w`,
which will replace it with the correct ones.
Or one can bump the `go.mod` go version to Go 1.N.
- Release notes will explain `//go:build` and the transition.
At this point:
- Go 1.(N-2) is now unsupported, per the [Go release policy](https://golang.org/doc/devel/release.html#policy).
- Packages will build in Go 1.N using the same files as in Go 1.(N−1),
provided they pass Go 1.N `go vet`.
- Packages that contain conflicting `//go:build` and `// +build` lines
will fail Go 1.N `go vet`.
- Anyone using `gofmt` on save will not fail `go vet`.
- Packages that contain only `//go:build` lines will work fine when
using only Go 1.N.
If such packages are built using Go 1.(N−1), the build will fail, loud and clear.
**Go 1.(N+1)** would complete the transition. In Go 1.(N+1):
- A new fix in `go fix` will remove `// +build` stanzas,
making sure to leave behind equivalent `//go:build` lines.
The removal only happens when `go fix` is being run in a module
with a `go 1.N` (or later) line, which is taken as an explicit signal
that the developer no longer needs compatibility with Go 1.(N−1).
The removal is never done in GOPATH mode, which lacks
any such explicit signal.
- Release notes will explain that the transition is complete.
At this point:
- Go 1.(N−1) is now unsupported, per the Go release policy.
- Packages will build in Go 1.(N+1) using the same files as in Go 1.N, always.
- Running `go fix` will remove all `// +build` lines from the source tree,
leaving behind equivalent, easier-to-read `//go:build` lines,
but only on modules that have set a base requirement of Go 1.N.
## Rationale
The motivation is laid out in the Background section above.
The rationale for using Go syntax is that Go developers already
understand one boolean expression syntax.
It makes more sense to reuse that one
than to maintain a second one.
No other syntaxes were considered: any other syntax would be
a second (well, a third) syntax to learn.
As Michael Munday noted in 2018, these lines are difficult to test,
so we would be well served to make them as straightforward as possible
to understand.
This observation is reinforced by the examples in the introduction.
Tooling will need to keep supporting `// +build` lines indefinitely,
in order to continue to build old code.
Similarly, the `go vet` check that `//go:build` and `// +build` lines
match when both are present will be kept indefinitely.
But the `go fix` should at least help us remove `// +build` lines
from new versions of code, so that most developers stop seeing them
or needing to edit them.
The rationale for disallowing multiple `//go:build` lines
is that the entire goal of this design is to replace
implicit AND and OR with explicit `&&` and `||` operators.
Allowing multiple lines reintroduces an implicit operator.
The rationale for the transition is laid out in that section.
We don’t want to break Go developers unnecessarily,
nor to make it easy for dependency authors to break
their dependents.
The rationale for using `//go:build` as the new prefix is that
it matches our now-established convention of using `//go:`
prefixes for directives to the build system or compilers
(see also `//go:generate`, `//go:noinline`, and so on).
The rationale for introducing a new prefix (instead of reusing `// +build`)
is that the new prefix makes it possible to write files that
contain both syntaxes, enabling a smooth transition.
The main alternative to this design is to do nothing at all
and simply stick with the current syntax.
That is an attractive option, since it avoids going through
a transition.
On the other hand, the benefit of the clearer syntax will only grow as we get
more and more Go developers and more and more Go code,
and the transition should be fairly smooth and low-cost.
## Compatibility
Go code that builds today will keep building indefinitely,
without any changes.
The transition aims to make it difficult
to cause new incompatibilities accidentally,
with `gofmt` and `go vet` working to keep old and
new syntaxes in sync during the transition.
Compatibility is also the reason for not providing the `go` `fix`
that removes `// +build` lines until Go 1.(N+1) is released:
that way, the automated tool that breaks Go 1.(N−1) users
is not even available until Go 1.(N−1) is no longer supported.
# Proposal: Separate soft and hard heap size goal
Author(s): Austin Clements
Inspired by discussion with Rick Hudson and Rhys Hiltner
Last updated: 2017-10-31
Discussion at https://golang.org/issue/14951.
## Background
The GC pacer is responsible for determining when to start a GC cycle
and how much back-pressure to put on allocation to prevent exceeding
the goal heap size.
It aims to balance two goals:
1. Complete marking before the allocated heap exceeds the GOGC-based
goal heap size.
2. Minimize GC CPU consumed beyond the 25% reservation.
In order to satisfy the first goal, the pacer forces the mutator to
assist with marking if it is allocating too quickly.
These mark assists are what cause GC CPU to exceed the 25%, since the
scheduler dedicates 25% to background marking without assists.
Hence, to satisfy the second goal, the pacer's trigger controller sets
the GC trigger heap size with the goal of starting GC early enough
that no assists are necessary.
In addition to reducing GC CPU overhead, minimizing assists also
reduces the per-goroutine latency variance caused by assists.
In practice, however, the trigger controller does not achieve the goal
of minimizing mark assists because it stabilizes on the wrong steady
state.
This document explains what happens and why and then proposes a
solution.
For a detailed description of the pacer, see the [pacer design
document](http://golang.org/s/go15gcpacing).
This document follows the nomenclature set out in the original design,
so it may be useful to review the original design document first.
## Problem
The trigger controller is a simple proportional feedback system based
on two measurements that directly parallel the pacer's two goals:
1. The *actual* heap growth *h<sub>a</sub>* at which marking
terminates, as a fraction of the heap goal size.
Specifically, it uses the overshoot ratio
*h* = (*h<sub>a</sub>* − *h<sub>T</sub>*)/(*h<sub>g</sub>*−*h<sub>T</sub>*),
which is how far between the trigger *h<sub>T</sub>* and the goal
*h<sub>g</sub>* the heap was at completion.
Ideally, the pacer would achieve *h* = 1.
2. The *actual* GC CPU consumed *u<sub>a</sub>* as a fraction of the
total CPU available.
Here, the goal is fixed at *u<sub>g</sub>* = 0.25.
Using these, the trigger controller computes the error in the trigger
and adjusts the trigger based on this error for the next GC cycle.
Specifically, the error term is
![](14951/error-term.png)
However, *e*(*n*) = 0 not only in the desired case of
*h* = 1, *u<sub>a</sub>* = *u<sub>g</sub>*, but in
any state where *h* = *u<sub>g</sub>*/*u<sub>a</sub>*.
As a result, the trigger controller can stabilize in a state that
undershoots the heap goal and overshoots the CPU goal.
We can see this in the following
[plot](https://gist.github.com/aclements/f7a770f9cb5682e038fe3f6ebd66bcba)
of *e*(*n*), which shows positive error in blue, negative error in
red, and zero error in white:
![](14951/error-plot.png)
Coupled with how GC paces assists, this is exactly what happens when
the heap size is stable.
To satisfy the heap growth constraint, assist pacing conservatively
assumes that the entire heap is live.
However, with a GOGC of 100, only *half* of the heap is live in steady
state.
As a result, marking terminates when the allocated heap is only half
way between the trigger and the goal, i.e., at *h* = 0.5
(more generally, at *h* = 100/(100+GOGC)).
This causes the trigger controller to stabilize at
*u<sub>a</sub>* = 0.5, or 50% GC CPU usage, rather than
*u<sub>a</sub>* = 0.25.
This chronic heap undershoot leads to chronic CPU overshoot.
### Example
The garbage benchmark demonstrates this problem nicely when run as
`garbage -benchmem 512 -benchtime 30s`.
Even once the benchmark has entered steady state, we can see a
significant amount of time spent in mark assists (the narrow cyan
regions on every other row):
![](14951/trace-assists.png)
Using `GODEBUG=gcpacertrace=1`, we can
[plot](https://gist.github.com/aclements/6701446d1ef39e42f3337f00a6f94973)
the exact evolution of the pacing parameters:
![](14951/evolution-bad.png)
The thick black line shows the balance of heap growth and GC CPU at
which the trigger error is 0.
The crosses show the actual values of these two at the end of each GC
cycle as the benchmark runs.
During warmup, the pacer is still adjusting to the rapidly changing
heap.
However, once the heap enters steady state, GC reliably finishes at
50% of the target heap growth, which causes the pacer to dutifully
stabilize on 50% GC CPU usage, rather than the desired 25%, just as
predicted above.
## Proposed solution
I propose separating the heap goal into a soft goal, *h<sub>g</sub>*,
and a hard goal, *h<sub>g</sub>'*, and setting the assist pacing such
the allocated heap size reaches the soft goal in *expected
steady-state* (no live heap growth), but does not exceed the hard goal
even in the worst case (the entire heap is reachable).
The trigger controller would use the soft goal to compute the trigger
error, so it would be stable in the steady state.
Currently the work estimate used to compute the assist ratio is simply
*W<sub>e</sub>* = *s*, where *s* is the bytes of scannable
heap (that is, the total allocated heap size excluding no-scan tails
of objects).
This worst-case estimate is what leads to over-assisting and
undershooting the heap goal in steady state.
Instead, between the trigger and the soft goal, I propose using an
adjusted work estimate
*W<sub>e</sub>* = *s*/(1+*h<sub>g</sub>*).
In the steady state, this would cause GC to complete when the
allocated heap was roughly the soft heap goal, which should cause the
trigger controller to stabilize on 25% CPU usage.
If allocation exceeds the soft goal, the pacer would switch to the
worst-case work estimate *W<sub>e</sub>* = *s* and aim for
the hard goal with the new work estimate.
This leaves the question of how to set the soft and hard goals.
I propose setting the soft goal the way we currently set the overall
heap goal: *h<sub>g</sub>* = GOGC/100, and setting the hard
goal to allow at most 5% extra heap growth:
*h<sub>g</sub>'* = 1.05*h<sub>g</sub>*.
The consequence of this is that we would reach the GOGC-based goal in
the steady state.
In a heap growth state, this would allow heap allocation to overshoot
the GOGC-based goal slightly, but this is acceptable (maybe even
desirable) during heap growth.
This also has the advantage of allowing GC to run less frequently by
targeting the heap goal better, thus consuming less total CPU for GC.
It will, however, generally increase heap sizes by more accurately
targeting the intended meaning of GOGC.
With this change, the pacer does a significantly better job of
achieving its goal on the garbage benchmark:
![](14951/evolution-good.png)
As before, the first few cycles have high variance from the goal
because the heap is growing rapidly, so the pacer cannot find a stable
point.
However, it then quickly converges near the optimal point of reaching
the soft heap goal at 25% GC CPU usage.
Interestingly, while most of the variance in the original design was
around GC CPU usage, that variance has been traded to the heap ratio
in this new design.
This is because the scheduler *does not allow* GC CPU usage to drop
below 25%.
Hence, the controller saturates and the inherent variance shifts to
the less constrained dimension.
To address this, I propose making one further change: dedicate only
20% of the CPU to background marking, with the expectation that 5%
will be used for mark assists in the steady state.
This keeps the controller out of saturation and gives it some "wiggle
room", while still minimizing time spent in mark assists The result is
very little variance from the goal in either dimension in the steady
state:
![](14951/evolution-best.png)
## Evaluation
To evaluate this change, we use the go1 and x/benchmarks suites.
All results are based on [CL 59970 (PS2)](http://golang.org/cl/59970)
and [CL 59971 (PS3)](http://golang.org/cl/59971).
Raw results from the go1 benchmarks can be viewed
[here](https://perf.golang.org/search?q%3Dupload:20171031.1) and the
x/benchmarks can be viewed
[here](https://perf.golang.org/search?q%3Dupload:20171031.2).
### Throughput
The go1 benchmarks show little effect in throughput, with a geomean
slowdown of 0.16% and little variance.
The x/benchmarks likewise show relatively little slowdown, except for
the garbage benchmark with a 64MB live heap, which slowed down by
4.27%.
This slowdown is almost entirely explained by additional time spent in
the write barrier, since the mark phase is now enabled longer.
It's likely this can be mitigated by optimizing the write barrier.
<!-- TODO: trace.cl59970.2.pre vs trace.cl59971.3 is quite good.
There's still something bringing down the per-goroutine minimum by
causing rare really long assists (with little debt), though the MMU is
still better and the MUT is much better. -->
<!-- TODO: Throughput, heap size, MMU, execution trace, effect of GOGC,
other benchmarks -->
## Alternatives and additional solutions
**Adjust error curve.** Rather than adjusting the heap goal and work
estimate, an alternate approach would be to adjust the zero error
curve to account for the expected steady-state heap growth.
For example, the modified error term
![](14951/error-term-mod.png)
results in zero error when
*h* = *u<sub>g</sub>*/(*u<sub>a</sub>*(1+*h<sub>g</sub>*)),
which crosses *u<sub>a</sub>* = *u<sub>g</sub>* at
*h* = 1/(1+*h<sub>g</sub>*), which is exactly the expected
heap growth in steady state.
This mirrors the adjusted heap goal approach, but does so by starting
GC earlier rather than allowing it to finish later.
This is a simpler change, but has some disadvantages.
It will cause GC to run more frequently rather than less, so it will
consume more total CPU.
It also interacts poorly with large GOGC by causing GC to finish so
early in the steady-state that it may largely defeat the purpose of
large GOGC.
Unlike with the proposed heap goal approach, there's no clear parallel
to the hard heap goal to address the problem with large GOGC in the
adjusted error curve approach.
**Bound assist ratio.** Significant latency issues from assists may
happen primarily when the assist ratio is high.
High assist ratios create a large gap between the performance of
allocation when assisting versus when not assisting.
However, the assist ratio can be estimated as soon as the trigger and
goal are set for the next GC cycle.
We could set the trigger earlier if this results in an assist ratio
high enough to have a significant impact on allocation performance.
**Accounting for floating garbage.** GOGC's effect is defined in terms
of the "live heap size," but concurrent garbage collectors never truly
know the live heap size because of *floating garbage*.
A major source of floating garbage in Go is allocations that happen
while GC is active, since all such allocations are retained by that
cycle.
These *pre-marked allocations* increase the runtime's estimate of the
live heap size (in a way that's dependent on the trigger, no less),
which in turn increases the GOGC-based goal, which leads to larger
heaps.
We could account for this effect by using the fraction of the heap
that is live as an estimate of how much of the pre-marked memory is
actually live.
This leads to the following estimate of the live heap:
![](14951/heap-est.png), where *m* is the bytes of marked heap and
*H<sub>T</sub>* and *H<sub>a</sub>* are the absolute trigger and
actual heap size at completion, respectively.
This estimate is based on the known *post-marked live heap* (marked
heap that was allocated before GC started),
![](14951/post-marked-live-heap.png).
From this we can estimate that the overall fraction of the heap that
is live is ![](14951/est-fraction.png).
This yields an estimate of how much of the pre-marked heap is live:
![](14951/est-pre-marked.png).
The live heap estimate is then simply the sum of the post-marked live
heap and the pre-marked live heap estimate.
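Assuming the image formulas follow the surrounding description, the whole estimate can be sketched as follows (one plausible reading of the estimate, written out for clarity; the pre-marked heap is everything allocated between the trigger and completion):

```go
// estimateLiveHeap estimates the live heap, accounting for floating
// garbage. m is bytes of marked heap, trigger is the absolute trigger
// H_T, and actual is the heap size H_a at completion.
func estimateLiveHeap(m, trigger, actual float64) float64 {
	preMarked := actual - trigger        // allocated during GC; all retained by this cycle
	postMarkedLive := m - preMarked      // marked heap allocated before GC started
	fracLive := postMarkedLive / trigger // estimated live fraction of the pre-GC heap
	return postMarkedLive + fracLive*preMarked
}
```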
**Use work credit as a signal.** Above, we suggested decreasing the
background mark worker CPU to 20% in order to avoid saturating the
trigger controller in the regime where there are no assists.
Alternatively, we could use work credit as a signal in this regime.
If GC terminates with a significant amount of remaining work credit,
that means marking significantly outpaced allocation, and the next GC
cycle can trigger later.
TODO: Think more about this.
How do we balance withdrawals versus the final balance?
How does this relate to the heap completion size?
What would the exact error formula be?
**Accounting for idle.** Currently, the trigger controller simply
ignores idle worker CPU usage when computing the trigger error because
changing the trigger won't directly affect idle CPU.
However, idle time marking does affect the heap completion ratio, and
because it contributes to the work credit, it also reduces assists.
As a result, the trigger becomes dependent on idle marking anyway,
which can lead to unstable trigger behavior: if the application has a
period of high idle time, GC will repeatedly finish early and the
trigger will be set very close to the goal.
If the application then switches to having low idle time, GC will
trigger too late and assists will be forced to make up for the work
that idle marking was previously performing.
Since idle time can be highly variable and unpredictable in real
applications, this leads to bad GC behavior.
To address this, the trigger controller could account for idle
utilization by scaling the heap completion ratio to estimate what it
would have been without help from idle marking.
This would be like assuming the next cycle won't have any idle time.
# Generics implementation - GC Shape Stenciling
This document describes a method to implement the [Go generics proposal](https://go.googlesource.com/proposal/+/refs/heads/master/design/go2draft-type-parameters.md) by stenciling the code for each different *GC shape* of the instantiated types, and using a *dictionary* to handle differing behaviors of types that have the same shape.
A more detailed and up-to-date description of the actual implementation released
in Go 1.18 is given in [this document](https://github.com/golang/proposal/blob/master/design/generics-implementation-dictionaries-go1.18.md).
This proposal is a middle ground between the [Generics Implementation - Stenciling](https://go.googlesource.com/proposal/+/refs/heads/master/design/generics-implementation-stenciling.md) and [Generics Implementation - Dictionaries](https://go.googlesource.com/proposal/+/refs/heads/master/design/generics-implementation-dictionaries.md) proposals.
The _GC shape_ of a type means how that type appears to the allocator / garbage collector. It is determined by its size, its required alignment, and which parts of the type contain a pointer.
When we generate code for a generic function, we will generate a single chunk of assembly for each unique GC shape used by any instantiation. Each chunk of assembly will take as an argument a _dictionary_, which is a set of information describing the particular concrete types that the parameterized types take on. It includes the concrete types themselves, of course, but also derived information as we will see below.
The most important feature of a dictionary is that it is computable at compile time. All dictionaries will reside in the read-only data section, and will be passed around by reference. Anything they reference (types, other dictionaries, etc.) must also be in the read-only or code data sections.
A running example of a generic function:
```
func f[T1, T2 any](x int, y T1) T2 {
...
}
```
With a callsite:
```
f[int, float64](7, 3.5)
```
The implementation of f will have an additional argument which is the pointer to the dictionary structure. We could put the additional argument first or last, or in its own register. Reserving a register for it seems overkill. Putting it first is similar to how receivers are passed (speaking of which, would the receiver or the dictionary come first?). Putting it last means less argument shuffling in the case where wrappers are required (not sure where those might be yet).
The dictionary will contain a few additional fields beyond the instantiated types themselves, depending on what the implementation of f needs. Note that we must look inside the implementation of f to determine what is required. This means that the compiler will need to summarize what fields are necessary in the dictionary of the function, so that callers can compute that information and put it in the dictionary when it knows what the instantiated types are. (Note this implies that we can’t instantiate a generic function knowing only its signature - we need its implementation at compile time also. So implemented-in-assembly and cross-shared-object-boundary instantiations are not possible.)
The dictionary will contain the following items:
## Instantiated types
The first thing that the dictionary will contain is a reference to the `runtime._type` for each parameterized type.
```
type dictionary struct {
T1 *runtime._type
T2 *runtime._type
...
}
```
We should probably include these values unconditionally, even if the implementation doesn’t need them (for printing in tracebacks, for example).
## Derived types
The code in f may declare new types which are derived from the generic parameter types. For instance:
```
type X struct { x int; y T1 }
m := map[string]T1{}
```
The dictionary needs to contain a `*runtime._type` for each of the types mentioned in the body of f which are derived from the generic parameter types.
```
type dictionary struct {
...
D1 *runtime._type // struct { x int; y T1 }
D2 *runtime._type // map[string]T1
...
}
```
How will the caller know what derived types the body of f needs? This is a very important question, and will be discussed at length later (see the proto-dictionary section). For now, just assume that there will be summary information for each function which lets the callsite know what derived types are needed.
## Subdictionaries
If f calls other functions, it needs a dictionary for those calls. For example,
```
func g[T any](x T) { ... }
```
Then in f,
```
g[T1](y)
```
The call to g needs a dictionary. At the callsite to g from f, f has no way to know what dictionary it should use, because the type parameterizing the instantiation of g is a generic type. So the caller of f must provide that dictionary.
```
type dictionary struct {
...
S1 *dictionary // SubDictionary for call to g
S2 *dictionary // SubDictionary for some other call
...
}
```
## Helper methods
The dictionary should contain methods that operate on the generic types. For instance, if f has the code:
```
y2 := y + 1
if y2 > y { … }
```
(assuming here that `T1` has a type list that allows `+` and `>`), then the dictionary must contain methods that implement `+` and `>`.
```
type dictionary struct {
...
plus func(z, x, y *T1) // does *z = *x+*y
greater func(x, y *T1) bool // computes *x>*y
...
}
```
There’s some choice available here as to what methods to include. For `new(T1)` we could include in the dictionary a method that returns a `*T1`, or we could call `runtime.newobject` directly with the `T1` field of the dictionary. Similarly for many other tasks (`+`, `>`, ...), we could use runtime helpers instead of dictionary methods, passing the appropriate `*runtime._type` arguments so the runtime could switch on the type and do the appropriate computation.
## Stack layout
For this proposal (unlike the pure dictionaries proposal), nothing special for stack layout is required. Because we are stenciling for each GC shape, the layout of the stack frame, including where all the pointers are, is determined. Stack frames are constant sized, argument and locals pointer maps are computable at compile time, outargs offsets are constant, etc.
## End of Dictionary
```
type dictionary struct {
...
// That's it.
}
```
## The Proto-Dictionary
Callers of `f` require a bunch of information about `f` so that they can assemble an appropriate dictionary. We’ll call this information a proto-dictionary. Each entry in the proto-dictionary is conceptually a function from the concrete types used to instantiate the generic function, to the contents of the dictionary entry. At each callsite at compile time, the proto-dictionary is evaluated with the concrete type parameters to produce a real dictionary. (Or, if the callsite uses some generic types as type arguments, partially evaluate the proto-dictionary to produce a new proto-dictionary that represents some sub-dictionary of a higher-level dictionary.) There are two main features of the proto-dictionary. The first is that the functions described above must be computable at compile time. The second is that the proto-dictionary must be serializable, as we need to write it to an object file and read it back from an object file (for cases where the call to the generic function is in a different package than the generic function being called).
The proto-dictionary includes information for all the sections listed above:
* Derived types. Each derived type is a “skeleton” type with slots to put some of `f`’s type parameters.
* Any sub-proto-dictionaries for callsites in `f`. (Note: any callsites in `f` which use only concrete type parameters do not need to be in the dictionary of `f`, because they can be generated at that callsite. Only callsites in `f` which use one or more of `f`’s type parameters need to be a subdictionary of `f`’s dictionary.)
* Helper methods, if needed, for all types+operations that need them.
## Closures
What if `f` creates a closure?
```
func f[T any](x interface{}, y T) {
c := func() {
x = y
}
c()
}
```
We need to pass a dictionary to the anonymous function as part of the closure, so it knows how to do things like assign a value of generic type to an `interface{}`. When building the dictionary for `f`, one of the subdictionaries needs to be the dictionary required for the anonymous function, which `f` can then use as part of constructing the closure.
## Generic Types
This document has so far just considered generic functions. But we also need to handle generic types. These should be straightforward to stencil just like we do for derived types within functions.
## Generating instantiations
We need to generate at least one instantiation of each generic function for each GC shape it is instantiated with.
Note that the number of instantiations could be exponential in the number of type parameters. Hopefully there won't be too many type parameters for a single function.
For functions that have a type list constraining each one of their type parameters, we can generate all possible instantiations using the types in the type list. (Because type lists operate on underlying types, we couldn't do this with a fully stenciled implementation. But types with the same underlying type must have the same GC shape, so that's not a problem in this proposal.) This instantiation can happen at the point of declaration. (If there are too many elements in the type list, or too many in the cross product of all the type lists of all the type parameters, we could decide to use the callsite-instantiation scheme instead.)
Otherwise, instantiating at the point of declaration is not possible. We then instead instantiate at each callsite. This can lead to duplicated work, as the same instantiation may be generated at multiple call sites. Within a compilation unit, we can avoid recomputing the same instantiation more than once. Across compilation units, however, it is more difficult. For starters we might allow multiple instantiations to be generated and then deduped by the linker. A more aggressive scheme would allow the `go build` tool to record which instantiations have already been generated and pass that list to the compiler so it wouldn't have to do duplicate work. It can be tricky to make building deterministic under such a scheme (which is probably required to make the build cache work properly).
TODO: generating instantiations when some type parameters are themselves generic.
## Naming instantiations
To get easy linker deduplication, we should name instantiations using some encoding of their GC shape. We could add a size and alignment to a function name easily enough. Adding ptr/nonptr bits is a bit trickier because such an encoding could become large.
## Deduplication
Code for the instantiation of a specific generic function with a particular GC shape of its type parameters should be deduplicated by the linker. This deduplication will be done by name.
We should name dictionaries appropriately, so deduplication of dictionaries happens automatically. For instance, two different packages instantiating `f` using the same concrete types should use the same dictionary in the final binary. Deduplication should work fine using just names as is done currently in the compiler for, e.g., `runtime._type` structures.
Then the worst case space usage is one dictionary per instantiation. Note that some subdictionaries might be equivalent to a top-level dictionary for that same function.
## Other Issues
Recursion - can a dictionary ever reference itself? How do we build it, and the corresponding proto-dictionaries, then? I haven’t wrapped my head around the cases in which this could come up.
Dictionary layout. Because the dictionary is completely compile time and read only, it does not need to adhere to any particular structure. It’s just an array of bytes. The compiler assigns meanings to the fields as needed, including any ordering or packing. We would, of course, keep some sort of ordering just for our sanity.
We probably need a way for the runtime to get access to the dictionary itself, which could be done by always making it the first argument, and storing it in a known place. The runtime could use the dictionary to disambiguate the type parameters in stack tracebacks, for example.
## Risks
As this is a hybrid of the Stenciling and Dictionaries methods, it has a mix of benefits and drawbacks of both.
* How much do we save in code size relative to fully stenciled? How much, if at all, are we still worse than the dictionary approach?
* How much slower would GC shape stenciling be than stenciling everything? The assembly code will in most cases be the same in the GC shape stenciled and fully stenciled implementations, as all the code for manipulating items of generic type is straightforward once we know the GC shape. The one exception is that method calls won't be fully resolvable at compile time. That could be problematic for escape analysis: any methods called on the generic type will need to be analyzed conservatively, which could lead to more heap allocation than in a fully stenciled implementation. Similarly, inlining won't happen in situations where it could happen with a fully stenciled implementation.
## TODO
Register calling convention. In ABI0, argument passing is completely determined by GC shape. In the register convention, though, it isn't quite. For instance, `struct {x, y int}` and `[2]int` get allocated to registers differently. That makes figuring out where function inputs appear, and where callout arguments should go, dependent on the instantiated types. We could fix this by either including additional type info into the GC shape, or modifying the calling convention to handle arrays just like structs. I'm leaning towards the former. We might need to distinguish arrays anyway to ensure succinct names for the instantiations.
# Proposal: Versioned Go Modules
Author: Russ Cox\
Last Updated: March 20, 2018\
Discussion: https://golang.org/issue/24301
## Abstract
We propose to add awareness of package versions to the Go toolchain, especially the `go` command.
## Background
The first half of the blog post [Go += Package Versioning](https://research.swtch.com/vgo-intro) presents detailed background for this change.
In short, it is long past time to add versions to the working vocabulary of both Go developers and our tools,
and this proposal describes a way to do that.
[Semantic versioning](https://semver.org) is the name given to an established convention for assigning version numbers
to projects.
In its simplest form, a version number is MAJOR.MINOR.PATCH, where MAJOR, MINOR, and PATCH
are decimal numbers.
The syntax used in this proposal follows the widespread convention of
adding a “v” prefix: vMAJOR.MINOR.PATCH.
Incrementing MAJOR indicates an expected breaking change.
Otherwise, a later version is expected to be backwards compatible
with earlier versions within the same MAJOR version sequence.
Incrementing MINOR indicates a significant change or new features.
Incrementing PATCH is meant to be reserved for very small, very safe changes,
such as small bug fixes or critical security patches.
The sequence of [vgo-related blog posts](https://research.swtch.com/vgo) presents more detail
about the proposal.
## Proposal
I propose to add versioning to Go using the following approach.
1. Introduce the concept of a _Go module_, which is a group of
packages that share a common prefix, the _module path_, and are versioned together as a single unit.
Most projects will adopt a workflow in which a version-control repository
corresponds exactly to a single module.
Larger projects may wish to adopt a workflow in which a
version-control repository can hold multiple modules.
Both workflows will be supported.
2. Assign version numbers to modules by tagging specific commits
with [semantic versions](https://semver.org) such as `v1.2.0`.
(See
the [Defining Go Modules](https://research.swtch.com/vgo-module) post
for details, including how to tag multi-module repositories.)
3. Adopt [semantic import versioning](https://research.swtch.com/vgo-import),
in which each major version has a distinct import path.
Specifically, an import path contains a module path, a version number,
and the path to a specific package inside the module.
If the major version is v0 or v1, then the version number element
must be omitted; otherwise it must be included.
<p style="text-align:center">
<img width=343 height=167 src="24301/impver.png" srcset="24301/impver.png 1x, 24301/impver@1.5x.png 1.5x, 24301/impver@2x.png 2x, 24301/impver@3x.png 3x, 24301/impver@4x.png 4x">
</p>
The packages imported as `my/thing/sub/pkg`, `my/thing/v2/sub/pkg`, and `my/thing/v3/sub/pkg`
come from major versions v1, v2, and v3 of the module `my/thing`,
but the build treats them simply as three different packages.
A program that imports all three will have all three linked into the final binary,
just as if they were `my/red/pkg`, `my/green/pkg`, and `my/blue/pkg`
or any other set of three different import paths.
Note that only the major version appears in the import path: `my/thing/v1.2/sub/pkg` is not allowed.
4. Explicitly adopt the “import compatibility rule”:
> _If an old package and a new package have the same import path,_\
> _the new package must be backwards compatible with the old package._
The Go project has encouraged this convention from the start
of the project, but this proposal gives it more teeth:
upgrades by package users will succeed or fail
only to the extent that package authors follow the import
compatibility rule.
The import compatibility rule only applies to tagged
releases starting at v1.0.0.
Prerelease (vX.Y.Z-anything) and v0.Y.Z versions
need not follow compatibility with earlier versions,
nor do they impose requirements on future versions.
In contrast, tagging a commit vX.Y.Z for X ≥ 1 explicitly
indicates “users can expect this module to be stable.”
In general, users should expect a module to follow
the [Go 1 compatibility rules](https://golang.org/doc/go1compat#expectations)
once it reaches v1.0.0,
unless the module's documentation clearly states exceptions.
5. Record each module's path and dependency requirements in a
[`go.mod` file](XXX) stored in the root of the module's file tree.
6. To decide which module versions to use in a given build,
apply [minimal version selection](https://research.swtch.com/vgo-mvs):
gather the transitive closure of all the listed requirements
and then remove duplicates of a given major version of a module
by keeping the maximum requested version,
which is also the minimum version satisfying all listed requirements.
Minimal version selection has two critical properties.
First, it is trivial to implement and understand.
Second, it never chooses a module version not listed in some `go.mod` file
involved in the build: new versions are not incorporated
simply because they have been published.
The second property produces [high-fidelity builds](XXX)
and makes sure that upgrades only happen when
developers request them, never unexpectedly.
7. Define a specific zip file structure as the
“interchange format” for Go modules.
The vast majority of developers will work directly with
version control and never think much about these zip files,
if at all, but having a single representation
enables proxies, simplifies analysis sites like godoc.org
or continuous integration, and likely enables more
interesting tooling not yet envisioned.
8. Define a URL schema for fetching Go modules from proxies,
used both for installing modules using custom domain names
and also when the `$GOPROXY` environment variable is set.
The latter allows companies and individuals to send all
module download requests through a proxy for security,
availability, or other reasons.
9. Allow running the `go` command in file trees outside GOPATH,
provided there is a `go.mod` in the current directory or a
parent directory.
That `go.mod` file defines the mapping from file system to import path
as well as the specific module versions used in the build.
See the [Versioned Go Commands](https://research.swtch.com/vgo-cmd) post for details.
10. Disallow use of `vendor` directories, except in one limited use:
a `vendor` directory at the top of the file tree of the top-level module
being built is still applied to the build,
to continue to allow self-contained application repositories.
(Ignoring other `vendor` directories ensures that
Go returns to builds in which each import path has the same
meaning throughout the build
and establishes that only one copy of a package with a given import
path is used in a given build.)
The “[Tour of Versioned Go](https://research.swtch.com/vgo-tour)”
blog post demonstrates how most of this fits together to create a smooth user experience.
## Rationale
Go has struggled with how to incorporate package versions since `goinstall`,
the predecessor to `go get`, was released eight years ago.
This proposal is the result of eight years of experience with `goinstall` and `go get`,
careful examination of how other languages approach the versioning problem,
and lessons learned from Dep, the experimental Go package management tool released in January 2017.
A few people have asked why we should add the concept of versions to our tools at all.
Packages do have versions, whether the tools understand them or not.
Adding explicit support for versions
lets tools and developers communicate more clearly when
specifying a program to be built, run, or analyzed.
At the start of the process that led to this proposal, almost two years ago,
we all believed the answer would be to follow the package versioning approach
exemplified by Ruby's Bundler and then Rust's Cargo:
tagged semantic versions,
a hand-edited dependency constraint file known as a manifest,
a machine-generated transitive dependency description known as a lock file,
a version solver to compute a lock file satisfying the manifest,
and repositories as the unit of versioning.
Dep, the community effort led by Sam Boyer, follows this plan almost exactly
and was originally intended to serve as the model for `go` command
integration.
Dep has been a significant help for Go developers
and a positive step for the Go ecosystem.
Early on, we talked about Dep simply becoming `go dep`,
serving as the prototype of `go` command integration.
However, the more I examined the details of the Bundler/Cargo/Dep
approach and what they would mean for Go, especially built into the `go` command,
a few of the details seemed less and less a good fit.
This proposal adjusts those details in the hope of
shipping a system that is easier for developers to understand
and to use.
### Semantic versions, constraints, and solvers
Semantic versions are a reasonable convention for
specifying software versions,
and version control tags written as semantic versions
have a clear meaning,
but the [semver spec](https://semver.org/) critically does not
prescribe how to build a system using them.
What tools should do with the version information?
Dave Cheney's 2015 [proposal to adopt semantic versioning](https://golang.org/issue/12302)
was eventually closed exactly because, even though everyone
agreed semantic versions seemed like a good idea,
we didn't know the answer to the question of what to do with them.
The Bundler/Cargo/Dep approach is one answer.
Allow authors to specify arbitrary constraints on their dependencies.
Build a given target by collecting all its dependencies
recursively and finding a configuration satisfying all those
constraints.
Unfortunately, the arbitrary constraints make finding a
satisfying configuration very difficult.
There may be many satisfying configurations, with no clear way to choose just one.
For example, if the only two ways to build A are by using B 1 and C 2
or by using B 2 and C 1, which should be preferred, and how should developers remember?
Or there may be no satisfying configuration.
Also, it can be very difficult to tell whether there are many, one, or no
satisfying configurations:
allowing arbitrary constraints makes
the version selection problem NP-complete,
[equivalent to solving SAT](https://research.swtch.com/version-sat).
In fact, most package managers now rely on SAT solvers
to decide which packages to install.
But the general problem remains:
there may be many equally good configurations,
with no clear way to choose between them,
there may be a single best configuration,
or there may be no good configurations,
and it can be very expensive to determine
which is the case in a given build.
This proposal's approach is a new answer, in which authors can specify
only limited constraints on dependencies: only the minimum required versions.
Like in Bundler/Cargo/Dep, this proposal builds a given target by
collecting all dependencies recursively and then finding
a configuration satisfying all constraints.
However, unlike in Bundler/Cargo/Dep, the process of finding a
satisfying configuration is trivial.
As explained in the [minimal version selection](https://research.swtch.com/vgo-mvs) post,
a satisfying configuration always exists,
and the set of satisfying configurations forms a lattice with
a unique minimum.
That unique minimum is the configuration that uses exactly the
specified version of each module, resolving multiple constraints
for a given module by selecting the maximum constraint,
or equivalently the minimum version that satisfies all constraints.
That configuration is trivial to compute and easy for developers
to understand and predict.
### Build Control
A module's dependencies must clearly be given some control over that module's build.
For example, if A uses dependency B, which uses a feature of dependency C introduced in C 1.5,
B must be able to ensure that A's build uses C 1.5 or later.
At the same time, for builds to remain predictable and understandable,
a build system cannot give dependencies arbitrary, fine-grained control
over the top-level build.
That leads to conflicts and surprises.
For example, suppose B declares that it requires an even version of D, while C declares that it requires a prime version of D.
D is frequently updated and is up to D 1.99.
Using B or C in isolation, it's always possible to use a relatively recent version of D (D 1.98 or D 1.97, respectively).
But when A uses both B and C,
a SAT solver-based build silently selects the much older (and buggier) D 1.2 instead.
To the extent that SAT solver-based build systems actually work,
it is because dependencies don't choose to exercise this level of control.
But then why allow them that control in the first place?
Although the hypothetical about prime and even versions is clearly unlikely,
real problems do arise.
For example, issue [kubernetes/client-go#325](https://github.com/kubernetes/client-go/issues/325) was filed in November 2017,
complaining that the Kubernetes Go client pinned builds to a specific version of `gopkg.in/yaml.v2` from
September 2015, two years earlier.
When a developer tried to use
a new feature of that YAML library in a program that already
used the Kubernetes Go client,
even after attempting to upgrade to the latest possible version,
code using the new feature failed to compile,
because “latest” had been constrained by the Kubernetes requirement.
In this case, the use of a two-year-old YAML library version may be entirely reasonable within the context of the Kubernetes code base,
and clearly the Kubernetes authors should have complete
control over their own builds,
but that level of control does not make sense to extend to other developers' builds.
The issue was closed after a change in February 2018
to update the specific YAML version pinned to one from July 2017.
But the issue is not really “fixed”:
Kubernetes still pins a specific, increasingly old version of the YAML library.
The fundamental problem is that the build system
allows the Kubernetes Go client to do this at all,
at least when used as a dependency in a larger build.
This proposal aims to balance
allowing dependencies enough control to ensure a successful
build with not allowing them so much control that they break the build.
Minimum requirements combine without conflict,
so it is feasible (even easy) to gather them from all dependencies,
and they make it impossible to pin older versions,
as Kubernetes does.
Minimal version selection gives
the top-level module in the build additional control,
allowing it to exclude specific module versions
or replace others with different code,
but those exclusions and replacements only apply
when found in the top-level module, not when the module
is a dependency in a larger build.
A module author is therefore in complete control of
that module's build when it is the main program being built,
but not in complete control of other users' builds that depend on the module.
I believe this distinction will make this proposal
scale to much larger, more distributed code bases than
the Bundler/Cargo/Dep approach.
### Ecosystem Fragmentation
Allowing all modules involved in a build to impose arbitrary
constraints on the surrounding build harms not just that build
but the entire language ecosystem.
If the author of popular package P finds that
dependency D 1.5 has introduced a change that
makes P no longer work,
other systems encourage the author of P to issue
a new version that explicitly declares it needs D < 1.5.
Suppose also that popular package Q is eager to take
advantage of a new feature in D 1.5
and issues a new version that explicitly declares it needs D ≥ 1.6.
Now the ecosystem is divided, and programs must choose sides:
are they P-using or Q-using? They cannot be both.
In contrast, being allowed to specify only a minimum required version
for a dependency makes clear that P's author must either
(1) release a new, fixed version of P;
(2) contact D's author to issue a fixed D 1.6 and then release a new P declaring a requirement on D 1.6 or later;
or else (3) start using a fork of D 1.4 with a different import path.
Note the difference between a new P that requires “D before 1.5”
and one that requires “D 1.6 or later.”
Both avoid D 1.5, but “D before 1.5” explains only which builds fail,
while “D 1.6 or later” explains how to make a build succeed.
### Semantic Import Versions
The example of ecosystem fragmentation in the previous section
is worse when it involves major versions.
Suppose the author of popular package P has used D 1.X as a dependency,
and then popular package Q decides to update to D 2.X because it
is a nicer API.
If we adopt Dep's semantics,
now the ecosystem is again divided, and programs must again choose sides:
are they P-using (D 1.X-using) or Q-using (D 2.X-using)?
They cannot be both.
Worse,
in this case, because D 1.X and D 2.X are different major versions
with different APIs, it is completely reasonable for the author of P
to continue to use D 1.X, which might even continue to be updated with
features and bug fixes.
That continued usage only prolongs the divide.
The end result is that
a widely-used package like D would in practice either
be practically prohibited from issuing version 2 or
else split the ecosystem in half by doing so.
Neither outcome is desirable.
Rust's Cargo makes a different choice from Dep.
Cargo allows each package to specify whether
a reference to D means D 1.X or D 2.X.
Then, if needed, Cargo links both a D 1.X and a D 2.X into the final binary.
This approach works better than Dep's,
but users can still get stuck.
If P exposes D 1.X in its own API and Q exposes D 2.X in its own API,
then a single client package C cannot use both P and Q,
because it will not be able to refer to both D 1.X (when using P)
and D 2.X (when using Q).
The [dependency story](https://research.swtch.com/vgo-import) in the semantic import versioning post
presents an equivalent scenario in more detail.
In that story, the base package manager starts out being like Dep,
and the `-fmultiverse` flag makes it more like Cargo.
If Cargo is one step away from Dep, semantic import versioning is two steps away.
In addition to allowing different major versions to be used
in a single build,
semantic import versioning gives the different major versions different names,
so that there's never any ambiguity
about which is meant in a given program file.
Making the import paths precise about the expected
semantics of the thing being imported (is it v1 or v2?)
eliminates the possibility of problems like those client C experienced
in the previous example.
More generally, in semantic import versioning,
an import of `my/thing` asks for the semantics of v1.X of `my/thing`.
As long as `my/thing` is following the import compatibility rule,
that's a well-defined set of functionality,
satisfied by the latest v1.X and possibly earlier ones
(as constrained by `go.mod`).
Similarly, an import of `my/thing/v2` asks for the semantics of v2.X of `my/thing`,
satisfied by the latest v2.X and possibly earlier ones
(again constrained by `go.mod`).
The meaning of the imports is clear, to both people and tools,
from reading only the Go source code,
without reference to `go.mod`.
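Concretely, a single source file can name both major versions side by side. This is a sketch only: `my/thing` is a hypothetical module path following semantic import versioning, so the file does not compile without that module existing.

```go
// Sketch: "my/thing" is a hypothetical module path.
package client

import (
	thingv1 "my/thing"    // asks for v1.X semantics
	thingv2 "my/thing/v2" // asks for v2.X semantics, a distinct import path
)

// Both major versions coexist in one file under distinct names, so a
// client can use P (built on v1) and Q (built on v2) together.
var (
	_ = thingv1.New // hypothetical v1 API
	_ = thingv2.New // hypothetical v2 API
)
```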
If instead we followed the Cargo approach, both imports would be `my/thing`, and the
meaning of that import would be ambiguous from the source code alone,
resolved only by reading `go.mod`.
Our article “[About the go command](https://golang.org/doc/articles/go_command.html)” explains:
> An explicit goal for Go from the beginning was to be able to build Go code
> using only the information found in the source itself, not needing to write
> a makefile or one of the many modern replacements for makefiles.
> If Go needed a configuration file to explain how to build your program,
> then Go would have failed.
It is an explicit goal of this proposal's design to preserve this property,
to avoid making the general semantics of a Go source file change depending on
the contents of `go.mod`.
With semantic import versioning, if `go.mod` is deleted and
recreated from scratch, the effect is only to possibly update
to newer versions of imported packages, but still ones that
are expected to work, thanks to import compatibility.
In contrast, if we take the Cargo approach, in which the `go.mod` file
must disambiguate between the arbitrarily different semantics of
v1 and v2 of `my/thing`, then `go.mod` becomes a required configuration file,
violating the original goal.
More generally, the main objection to adding `/v2/` to import paths is that
it's a bit longer, a bit ugly, and it makes explicit a semantically important
detail that other systems abstract away, which in turn induces more work for authors,
compared to other systems, when they change that detail.
But all of these were true when we introduced `goinstall`'s URL-like import paths,
and they've been a clear success.
Before `goinstall`, programmers wrote things like `import "igo/set"`.
To make that import work, you had to know to first check out `github.com/jacobsa/igo` into `$GOPATH/src/igo`.
The abbreviated paths had the benefit that if you preferred
a different version of `igo`, you could check your variant into
`$GOPATH/src/igo` instead, without updating any imports.
But the abbreviated imports also had the very real drawbacks that a build trying to use
both `igo/set` variants could not, and also that the Go source code did not record
anywhere exactly which `igo/set` it meant.
When `goinstall` introduced `import "github.com/jacobsa/igo/set"` instead,
that made the imports a bit longer and a bit ugly,
but it also made explicit a semantically important detail:
exactly which `igo/set` was meant.
The longer paths created a little more work for authors compared
to systems that stashed that information in a single configuration file.
But eight years later, no one notices the longer import paths,
we've stopped seeing them as ugly,
and we now rely on the benefits of being explicit about
exactly which package is meant by a given import.
I expect that once `/v2/` elements in import paths are
common in Go source files, the same will happen:
we will no longer notice the longer paths,
we will stop seeing them as ugly, and we will rely on the benefits of
being explicit about exactly which semantics are meant by a given import.
### Update Timing & High-Fidelity Builds
In the Bundler/Cargo/Dep approach, the package manager always prefers
to use the latest version of any dependency.
These systems use the lock file to override that behavior,
holding the updates back.
But lock files only apply to whole-program builds,
not to newly imported libraries.
If you are working on module A, and you add a new requirement on module B, which in turn requires module C,
these systems will fetch the latest of B and then also the latest of C.
In contrast, this proposal still fetches the latest of B (because it is
what you are adding to the project explicitly, and the default is to
take the latest of explicit additions) but then prefers to use the
exact version of C that B requires.
Although newer versions of C should work, it is safest to
use the one that B did.
Of course, if the build has a different reason to use a newer version of C, it can do that.
For example, if A also imports D, which requires a newer C, then the build should and will use that newer version.
But in the absence of such an overriding requirement,
minimal version selection will build A using the exact version of C requested by B.
If, later, a new version of B is released requesting a newer version of C,
then when A updates to that newer B,
C will be updated only to the version that the new B requires, not farther.
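As a sketch with invented module paths and versions, the requirement files behind this behavior might look like:

```
-- go.mod for module A, after `go get example.com/B` (latest is v1.4.0) --
module example.com/A

require example.com/B v1.4.0

-- go.mod for example.com/B v1.4.0 --
module example.com/B

require example.com/C v1.1.0
```

Even if C v1.1.5 has since been released, A's build uses C v1.1.0, the exact version B lists, until some other requirement or an explicit update asks for more.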
The [minimal version selection](https://research.swtch.com/vgo-mvs) blog post
refers to this kind of build as a “high-fidelity build.”
Minimal version selection has the key property that a recently-published version of C
is never used automatically.
It is only used when a developer asks for it explicitly.
For example, the developer of A could ask for all dependencies, including transitive dependencies, to be updated.
Or, less directly, the developer of B could update C and release a new B,
and then the developer of A could update B.
But either way, some developer working on some package in the build must
take an explicit action asking for C to be updated,
and then the update does not take effect in A's build until
a developer working on A updates some dependency leading to C.
Waiting until an update is requested ensures that updates only happen
when developers are ready to test them and deal with the possibility
of breakage.
Many developers recoil at the idea that adding the latest B would not
automatically also add the latest C,
but if C was just released, there's no guarantee it works in this build.
The more conservative position is to avoid using it until the user asks.
For comparison, the Go 1.9 go command does not automatically start using Go 1.10
the day Go 1.10 is released.
Instead, users are expected to update on their own
schedule,
so that they can control when they take on the risk of things breaking.
The reasons not to update automatically to the latest Go release
apply even more to individual packages:
there are more of them,
and most are not tested for backwards compatibility
as extensively as Go releases are.
If a developer does want to update all dependencies to the latest version,
that's easy: `go get -u`.
We may also add a `go get -p` that updates all dependencies to their
latest patch versions, so that C 1.2.3 might be updated to C 1.2.5 but not to C 1.3.0.
If the Go community as a whole reserved patch versions only for very safe
or security-critical changes, then that `-p` behavior might be useful.
## Compatibility
The work in this proposal is not constrained by
the [compatibility guidelines](https://golang.org/doc/go1compat) at all.
Those guidelines apply to the language and standard library APIs, not tooling.
Even so, compatibility more generally is a critical concern.
It would be a serious mistake to deploy changes to the `go` command
in a way that breaks all existing Go code or splits the ecosystem into
module-aware and non-module-aware packages.
On the contrary, we must make the transition as smooth and seamless as possible.
Module-aware builds can import non-module-aware packages
(those outside a tree with a `go.mod` file)
provided they are tagged with a v0 or v1 semantic version.
They can also refer to any specific commit using a “pseudo-version”
of the form v0.0.0-*yyyymmddhhmmss*-*commit*.
The pseudo-version form allows referring to untagged commits
as well as commits that are tagged with semantic versions at v2 or above
but that do not follow the semantic import versioning convention.
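For example (the module path, timestamp, and commit hash here are invented for illustration), a module-aware build can require an untagged commit with a line like:

```
require github.com/example/legacypkg v0.0.0-20180323180035-abcdef012345
```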
Module-aware builds can also consume requirement information
not just from `go.mod` files but also from all known pre-existing
version metadata files in the Go ecosystem:
`GLOCKFILE`, `Godeps/Godeps.json`, `Gopkg.lock`, `dependencies.tsv`,
`glide.lock`, `vendor.conf`, `vendor.yml`, `vendor/manifest`,
and `vendor/vendor.json`.
Existing tools like `dep` should have no trouble consuming
Go modules, simply ignoring the `go.mod` file.
It may also be helpful to add support to `dep` to read `go.mod` files in
dependencies, so that `dep` users are unaffected as their
dependencies move from `dep` to the new module support.
## Implementation
A prototype of the proposal is implemented in a fork of the `go` command called `vgo`,
available using `go get -u golang.org/x/vgo`.
We will refine this implementation during the Go 1.11 cycle and
merge it back into `cmd/go` in the main repository.
The plan, subject to proposal approval,
is to release module support in Go 1.11
as an optional feature that may still change.
The Go 1.11 release will give users a chance to use modules “for real”
and provide critical feedback.
Even though the details may change, future releases will
be able to consume Go 1.11-compatible source trees.
For example, Go 1.12 will understand how to consume
the Go 1.11 `go.mod` file syntax, even if by then the
file syntax or even the file name has changed.
In a later release (say, Go 1.12), we will declare the module support completed.
In a later release (say, Go 1.13), we will end support for `go` `get` of non-modules.
Support for working in GOPATH will continue indefinitely.
## Open issues (if applicable)
We have not yet converted large, complex repositories to use modules.
We intend to work with the Kubernetes team and others (perhaps CoreOS, Docker)
to convert their use cases.
It is possible those conversions will turn up reasons for adjustments
to the proposal as described here.
# Proposal: Extended backwards compatibility for Go
Russ Cox \
December 2022
Earlier discussion at https://go.dev/issue/55090.
Proposal at https://go.dev/issue/56986.
## Abstract
Go's emphasis on backwards compatibility is one of its key strengths.
There are, however, times when we cannot maintain strict compatibility,
such as when changing sort algorithms or fixing clear bugs,
when existing code depends on the old algorithm or the buggy behavior.
This proposal aims to address many such situations by keeping older Go programs
executing the same way even when built with newer Go distributions.
## Background
This proposal is about backward compatibility, meaning
**new versions of Go compiling older Go code**.
Old versions of Go compiling newer Go code is a separate problem,
with a different solution.
There is no proposal for that yet.
For now, see
[the discussion about forward compatibility](https://github.com/golang/go/discussions/55092).
Go 1 introduced Go's [compatibility promise](https://go.dev/doc/go1compat),
which says that old programs will by and large continue to run correctly in new versions of Go.
There is an exception for security problems and certain other implementation overfitting.
For example, code that depends on a given type _not_ implementing a particular interface
may change behavior when the type adds a new method, which we are allowed to do.
We now have about ten years of experience with Go 1 compatibility.
In general it works very well for the Go team and for developers.
However, there are also practices we've developed since then
that it doesn't capture (specifically GODEBUG settings),
and there are still times when developers' programs break.
I think it is worth extending our approach to try to break programs even less often,
as well as to explicitly codify GODEBUG settings
and clarify when they are and are not appropriate.
As background, I've been talking to the Kubernetes team
about their experiences with Go.
It turns out that Go's been averaging about one Kubernetes-breaking
change per year for the past few years.
I don't think Kubernetes is an outlier here:
I expect most large projects have similar experiences.
Once per year is not high, but it's not zero either,
and our goal with Go 1 compatibility is zero.
Here are some examples of Kubernetes-breaking changes that we've made:
- [Go 1.17 changed net.ParseIP](https://go.dev/doc/go1.17#net)
to reject addresses with leading zeros, like 0127.0000.0000.0001.
Go interpreted them as decimal, following some RFCs,
while all BSD-derived systems interpret them as octal.
Rejecting them avoids taking part in parser misalignment bugs.
(Here is an [arguably exaggerated security report](https://github.com/sickcodes/security/blob/master/advisories/SICK-2021-016.md).)
Kubernetes clusters may have stored configs using such addresses,
so this bug [required them to make a copy of the parsers](https://github.com/kubernetes/kubernetes/issues/100895)
in order to keep accessing old data.
In the interim, they were blocked from updating to Go 1.17.
- [Go 1.15 changed crypto/x509](https://go.dev/doc/go1.15#commonname)
not to fall back to a certificate's CN field to find a host name when the SAN field was omitted.
The old behavior was preserved when using `GODEBUG=x509ignoreCN=0`.
[Go 1.17 removed support for that setting](https://go.dev/doc/go1.17#crypto/x509).
The Go 1.15 change [broke a Kubernetes test](https://github.com/kubernetes/kubernetes/pull/93426)
and [required a warning to users in Kubernetes 1.19 release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#api-change-4).
The [Kubernetes 1.23 release notes](https://github.com/kubernetes/kubernetes/blob/776cff391524478b61212dbb6ea48c58ab4359e1/CHANGELOG/CHANGELOG-1.23.md#no-really-you-must-read-this-before-you-upgrade)
warned users who were using the GODEBUG override that it was gone.
- [Go 1.18 dropped support for SHA1 certificates](https://go.dev/doc/go1.18#sha1),
with a `GODEBUG=x509sha1=1` override.
We announced removal of that setting for Go 1.19
but changed plans on request from Kubernetes.
SHA1 certificates are apparently still used by some enterprise CAs
for on-prem Kubernetes installations.
- [Go 1.19 changed LookPath behavior](https://go.dev/doc/go1.19#os-exec-path)
to remove an important class of security bugs,
but the change may also break existing programs,
so we included a `GODEBUG=execerrdot=0` override.
The impact of this change on Kubernetes is still uncertain:
the Kubernetes developers flagged it as risky enough to warrant further investigation.
These kinds of behavioral changes don't only cause pain for Kubernetes developers and users.
They also make it impossible to update older, long-term-supported versions
of Kubernetes to a newer version of Go.
Those older versions don't have the same access to performance improvements and bug fixes.
Again, this is not specific to Kubernetes.
I am sure lots of projects are in similar situations.
As the examples show, over time we've adopted a practice
of being able to opt out of these risky changes using `GODEBUG` settings.
The examples also show that we have probably been too aggressive
about removing those settings.
But the settings themselves have clearly become an important part of Go's compatibility story.
Other important compatibility-related GODEBUG settings include:
- `GODEBUG=asyncpreemptoff=1` disables signal-based goroutine preemption, which occasionally uncovers operating system bugs.
- `GODEBUG=cgocheck=0` disables the runtime's cgo pointer checks.
- `GODEBUG=cpu.<extension>=off` disables use of a particular CPU extension at run time.
- `GODEBUG=http2client=0` disables client-side HTTP/2.
- `GODEBUG=http2server=0` disables server-side HTTP/2.
- `GODEBUG=netdns=cgo` forces use of the cgo resolver.
- `GODEBUG=netdns=go` forces use of the Go DNS resolver.
Programs that need one of these settings can usually set
the GODEBUG variable in `func init` of package main,
but for runtime variables, that's too late:
the runtime reads the variable early in Go program startup,
before any of the user program has run yet.
For those programs, the environment variable must be set in the execution environment.
It cannot be “carried with” the program.
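A minimal sketch of the `func init` approach, assuming a setting such as `http2client` that is consulted lazily on first use rather than at runtime startup:

```go
package main

import (
	"fmt"
	"os"
)

func init() {
	// Early enough for settings consulted on first use, such as
	// http2client, but too late for runtime settings like cgocheck,
	// which the runtime reads before any user code runs; those must
	// be set in the environment that launches the process.
	os.Setenv("GODEBUG", "http2client=0")
}

// currentGODEBUG reports the process's GODEBUG setting.
func currentGODEBUG() string {
	return os.Getenv("GODEBUG")
}

func main() {
	fmt.Println(currentGODEBUG()) // http2client=0
}
```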
Another problem with the GODEBUGs is that you have to know they exist.
If you have a large system written for Go 1.17 and want to update to Go 1.18's toolchain,
you need to know which settings to flip to keep as close to Go 1.17 semantics as possible.
I believe that we should make it even easier and safer
for large projects like Kubernetes to update to new Go releases.
See also my [talk on this topic at GopherCon](https://www.youtube.com/watch?v=v24wrd3RwGo).
## Proposal
I propose that we formalize and expand our use of GODEBUG to provide
compatibility beyond what is guaranteed by the current
[compatibility guidelines](https://go.dev/doc/go1compat).
Specifically, I propose that we:
1. Commit to always adding a GODEBUG setting for changes
allowed by the compatibility guidelines but that
nonetheless are likely to break a significant number of real programs.
2. Guarantee that GODEBUG settings last for at least 2 years (4 releases).
That is only a minimum; some, like `http2server`, will likely last forever.
3. Provide a runtime/metrics counter `/godebug/non-default-behavior/<name>:events`
to observe non-default-behavior due to GODEBUG settings.
4. Set the default GODEBUG settings based on the `go` line in the main module's go.mod,
so that updating to a new Go toolchain with an unmodified go.mod
mimics the older release.
5. Allow overriding specific default GODEBUG settings in the source code for package main
   using one or more lines of the form

       //go:debug <name>=<value>

   The GODEBUG environment variable set when a program runs
   would continue to override both these lines
   and the default inferred from the go.mod `go` line.
   An unrecognized //go:debug setting is a build error.
6. Adjust the `go/build` API to report these new `//go:debug` lines. Specifically, add this type:

       type Comment struct {
           Pos  token.Position
           Text string
       }

   and then in type `Package` we would add a new field

       Directives []Comment

   This field would collect all `//go:*` directives before the package line, not just `//go:debug`,
   in the hopes of supporting any future need for directives.
7. Adjust `go list` output to have a new field `DefaultGODEBUG string` set for main packages,
reporting the combination of the go.mod-based defaults and the source code overrides,
   as well as adding to `Package` new fields `Directives`, `TestDirectives`, and `XTestDirectives`, all of type `[]string`.
8. Add a new `DefaultGODEBUG` setting to `debug.BuildInfo.Settings`,
to be reported by `go version -m` and other tools
that inspect build details.
9. Document these commitments as well as how to use GODEBUG in
the [compatibility guidelines](https://golang.org/doc/go1compat).
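The counter from item 3 could be read through the existing `runtime/metrics` package. In this sketch the metric name follows the proposal; whether a given toolchain actually exports it is version-dependent, so the code checks for an unknown metric:

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

// readGODEBUGCounter reports the value of a /godebug/... counter,
// or -1 if the metric is not known to this Go toolchain.
func readGODEBUGCounter(name string) int64 {
	s := []metrics.Sample{{Name: name}}
	metrics.Read(s)
	// Unknown metric names come back with KindBad.
	if s[0].Value.Kind() != metrics.KindUint64 {
		return -1
	}
	return int64(s[0].Value.Uint64())
}

func main() {
	n := readGODEBUGCounter("/godebug/non-default-behavior/http2client:events")
	if n < 0 {
		fmt.Println("metric not supported by this Go version")
	} else {
		fmt.Printf("http2client non-default behaviors: %d\n", n)
	}
}
```

A production system reporting zero for such a counter is evidence that the corresponding GODEBUG override can be removed.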
## Rationale
The main alternate approach is to keep on doing what we are doing,
without these additions.
That makes it difficult for Kubernetes and other large projects
to update in a timely fashion, which cuts them off from performance improvements
and eventually security fixes.
An alternative way to provide these improvements and fixes would be to
extend Go's release support window to two or more years,
but that would require significantly more work
and would be a serious drag on the Go project overall.
It is better to focus our energy as well as the energy of Go developers
on the latest release.
Making it safer to update to the latest release does just that.
The rest of this section gives the affirmative case for each of the enumerated items
in the previous section.
1. Building on the rest of the compatibility guidelines, this commitment will
give developers added confidence that they can update to a new Go toolchain
safely with minimal disruption to their programs.
2. In the past we have planned to remove a GODEBUG after only a single release.
A single release cycle - six months - may well be too short for some developers,
especially where the GODEBUGs are adjusting settings that affect external
systems, like which protocols are used. For example, Go 1.14 (Feb 2020) removed
NPN support in crypto/tls,
but we patched it back into Google's internal Go toolchain
for almost three years while we waited for updates to
network devices that used NPN.
Today that would probably be a GODEBUG setting, and it would be
an example of something that takes a large company more than
six months to resolve.
3. When a developer is using a GODEBUG override, they need to be able to find out
whether it is safe to remove the override. Obviously testing is a good first step,
but production metrics can confirm what testing seems to show.
If the production systems are reporting zeros for `/godebug/non-default-behavior/<name>`,
that is strong evidence for the safety of removing that override.
4. Having the GODEBUG settings is not enough. Developers need to be able to determine
which ones to use when updating to a new Go toolchain.
Instead of forcing developers to look up what is new from one toolchain to the next,
setting the default to match the `go` line in `go.mod` keeps the program behavior
as close to the old toolchain as possible.
5. When developers do update the `go` line to a new Go version, they may still need to
keep a specific GODEBUG set to mimic an older toolchain.
There needs to be some way to bake that into the build:
it's not okay to make end users set an environment variable to run a program,
and setting the variable in main.main or even main's init can be too late.
The `//go:debug` lines provide a clear way to set those specific GODEBUGs,
presumably alongside comments explaining why they are needed and
when they can be removed.
6. This API is needed for the go command and other tools to scan source files
and find the new `//go:debug` lines.
7. This provides an easy way for developers to understand which default GODEBUG
their programs will be compiled with. It will be particularly useful when switching
from one `go` line to another.
8. This provides an easy way for developers to understand which default GODEBUG
their existing programs have been compiled with.
9. The compatibility documentation should explain all this so developers know about it.
## Compatibility
This entire proposal is about compatibility.
It does not violate any existing compatibility requirements.
It is worth pointing out that the GODEBUG mechanism is appropriate for security deprecations,
such as the SHA1 retirement, but not security fixes, like changing the version of LookPath
used by tools in the Go distribution. Security fixes need to always apply when building with
a new toolchain, not just when the `go` line has been moved forward.
One of the hard rules of point releases is it really must not break anyone,
because we never want someone to be unable to add an urgent security fix
due to some breakage in that same point release or an earlier one in the sequence.
That applies to the security fixes themselves too.
This means it is up to the authors of the security fix to find a fix
that does not require a GODEBUG.
LookPath is a good example.
There was a reported bug affecting go toolchain programs,
and we fixed the bug by making the LookPath change
in a forked copy of os/exec specifically for those programs.
We left the toolchain-wide fix for a major Go release precisely
because of the compatibility issue.
The same is true of net.ParseIP.
We decided it was an important security-hardening fix but on balance
inappropriate for a point release because of the potential for breakage.
It's hard for me to think of a security problem that would be so critical
that it must be fixed in a point release and simultaneously so broad
that the fix fundamentally must break unaffected user programs as collateral damage.
To date I believe we've always found a way to avoid such a fix,
and I think the onus is on those of us preparing security releases to continue to do that.
If this change is made in Go 1.N, then only GODEBUG settings introduced in Go 1.N
will be the first ones that are defaulted differently for earlier go.mod go lines.
Settings introduced in earlier Go versions will be accessible using `//go:debug`
but will not change their defaults based on the go.mod line.
The reason for this is compatibility: we want Go 1.N to behave as close as possible to Go 1.(N-1),
which did not change defaults based on the go.mod line.
To make this concrete, consider the GODEBUG `randautoseed=0`, which is supported in Go 1.20
to simulate Go 1.19 behavior.
When Go 1.20 builds a module that says `go 1.19`, it gets `randautoseed=1` behavior,
because Go 1.20 does not implement this GODEBUG proposal.
It would be strange for Go 1.21 to build the same code and turn on `randautoseed=1` behavior.
Updating from Go 1.19 to Go 1.20 has already incurred the behavior change
and potential breakage.
Updating from Go 1.20 to Go 1.21 should not revert the behavior change
and cause more potential breakage.
Continuing the concrete examples, Go 1.20 introduces a new GODEBUG
`zipinsecurepath`, which defaults to 1 in Go 1.20 to preserve old behavior
and allow insecure paths (for example absolute paths or paths starting with `../`).
Go 1.21 may change the default to 0, to start rejecting insecure paths in archive/zip.
If so, and if Go 1.21 also implements this GODEBUG proposal,
then modules with `go 1.20` lines compiled with Go 1.21 would keep allowing insecure paths.
Only when those modules update to `go 1.21` would they start rejecting insecure paths.
Of course, they could stay on Go 1.20 and add `//go:debug zipinsecurepath=0` to main
to get just the new behavior early,
and they could also update to Go 1.21 and add `//go:debug zipinsecurepath=1` to main
to opt out of the new behavior.
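Putting the go.mod default and the `//go:debug` override together, a sketch of a main package that stays on `go 1.20` but opts in to the stricter zip default early might look like this (hypothetical module; it requires a toolchain implementing this proposal):

```go
// Sketch: assumes this module's go.mod says "go 1.20", so under this
// proposal the build would otherwise default to zipinsecurepath=1.
// This directive opts in to the stricter Go 1.21 default early:
//
//go:debug zipinsecurepath=0
package main

import (
	"archive/zip"
	"errors"
	"fmt"
)

func main() {
	// With zipinsecurepath=0, archives containing absolute or ../
	// paths are reported via zip.ErrInsecurePath.
	_, err := zip.OpenReader("untrusted.zip")
	if errors.Is(err, zip.ErrInsecurePath) {
		fmt.Println("insecure path rejected")
	}
}
```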
## Implementation
Overall the implementation is fairly short and straightforward.
Documentation probably outweighs new code.
Russ Cox, Michael Matloob, and Bryan Mills will do the work.
A complete sketch of the implementation is in
[CL 453618](https://go.dev/cl/453618),
[CL 453619](https://go.dev/cl/453619),
[CL 453603](https://go.dev/cl/453603),
[CL 453604](https://go.dev/cl/453604), and
[CL 453605](https://go.dev/cl/453605).
The sketch does not include tests and documentation.
# Proposal: make the internal [lockedfile](https://godoc.org/github.com/golang/go/src/cmd/go/internal/lockedfile/) package public
Author(s): [Adrien Delorme]
Last updated: 2019-10-15
Discussion at https://golang.org/issue/33974.
## Abstract
Move already existing code residing in
`golang/go/src/cmd/go/internal/lockedfile` to `x/sync`.
## Background
A few open source Go projects are implementing file locking mechanisms but they
do not seem to be maintained anymore:
* https://github.com/gofrs/flock : This repo last accepted PRs in March
  2019, so this implementation may be maintained, and we could argue that the
  `lockedfile` package API is more ergonomic. Incompatibilities with AIX,
  Solaris, and Illumos are preventing file locking on both projects, but it
  looks like the Go team is addressing them for `lockedfile`.
* https://github.com/juju/fslock : Note that this implementation is both
  unmaintained and LGPL-licensed, so even folks who would like to use it might
  not be able to. Also note that this repo [was selected for removal in
  2017](https://github.com/juju/fslock/issues/4).
As a result, some major projects maintain their own version of it; for example:
[terraform](https://github.com/hashicorp/terraform/blob/1ff9a540202b8c36e33db950374bbb4495737d8f/states/statemgr/filesystem_lock_unix.go),
[boltdb](https://github.com/boltdb/bolt/search?q=flock&unscoped_q=flock). After
some research, it seemed to us that the already existing and maintained
[lockedfile](https://godoc.org/github.com/golang/go/src/cmd/go/internal/lockedfile/)
package is the best open-source version.
File-locking interacts pretty deeply with the `os` package and the system call
library in `x/sys`, so it makes sense for (a subset of) the same owners to
consider the evolution of those packages together.
We think making such a package public would benefit the community:
it is already part of the Go code base and is therefore maintained.
## Proposal
We propose to copy `golang/go/src/cmd/go/internal/lockedfile` to `x/exp` to
make it public, without changing any of the named types for now.
Exported names and comments as can be currently found in
[07b4abd](https://github.com/golang/go/tree/07b4abd62e450f19c47266b3a526df49c01ba425/src/cmd/go/internal/lockedfile):
```
// Package lockedfile creates and manipulates files whose contents should only
// change atomically.
package lockedfile
// A File is a locked *os.File.
//
// Closing the file releases the lock.
//
// If the program exits while a file is locked, the operating system releases
// the lock but may not do so promptly: callers must ensure that all locked
// files are closed before exiting.
type File struct {
// contains unexported fields
}
// Create is like os.Create, but returns a write-locked file.
// If the file already exists, it is truncated.
func Create(name string) (*File, error)
// Edit creates the named file with mode 0666 (before umask),
// but does not truncate existing contents.
//
// If Edit succeeds, methods on the returned File can be used for I/O.
// The associated file descriptor has mode O_RDWR and the file is write-locked.
func Edit(name string) (*File, error)
// Transform invokes t with the result of reading the named file, with its lock
// still held.
//
// If t returns a nil error, Transform then writes the returned contents back to
// the file, making a best effort to preserve existing contents on error.
//
// t must not modify the slice passed to it.
func Transform(name string, t func([]byte) ([]byte, error)) (err error)
// Open is like os.Open, but returns a read-locked file.
func Open(name string) (*File, error)
// OpenFile is like os.OpenFile, but returns a locked file.
// If flag implies write access (ie: os.O_TRUNC, os.O_WRONLY or os.O_RDWR), the
// file is write-locked; otherwise, it is read-locked.
func OpenFile(name string, flag int, perm os.FileMode) (*File, error)
// Read reads up to len(b) bytes from the File.
// It returns the number of bytes read and any error encountered.
// At end of file, Read returns 0, io.EOF.
//
// File can be read-locked or write-locked.
func (f *File) Read(b []byte) (n int, err error)
// ReadAt reads len(b) bytes from the File starting at byte offset off.
// It returns the number of bytes read and the error, if any.
// ReadAt always returns a non-nil error when n < len(b).
// At end of file, that error is io.EOF.
//
// File can be read-locked or write-locked.
func (f *File) ReadAt(b []byte, off int64) (n int, err error)
// Write writes len(b) bytes to the File.
// It returns the number of bytes written and an error, if any.
// Write returns a non-nil error when n != len(b).
//
// If File is not write-locked Write returns an error.
func (f *File) Write(b []byte) (n int, err error)
// WriteAt writes len(b) bytes to the File starting at byte offset off.
// It returns the number of bytes written and an error, if any.
// WriteAt returns a non-nil error when n != len(b).
//
// If file was opened with the O_APPEND flag, WriteAt returns an error.
//
// If File is not write-locked WriteAt returns an error.
func (f *File) WriteAt(b []byte, off int64) (n int, err error)
// Close unlocks and closes the underlying file.
//
// Close may be called multiple times; all calls after the first will return a
// non-nil error.
func (f *File) Close() error
// A Mutex provides mutual exclusion within and across processes by locking a
// well-known file. Such a file generally guards some other part of the
// filesystem: for example, a Mutex file in a directory might guard access to
// the entire tree rooted in that directory.
//
// Mutex does not implement sync.Locker: unlike a sync.Mutex, a lockedfile.Mutex
// can fail to lock (e.g. if there is a permission error in the filesystem).
//
// Like a sync.Mutex, a Mutex may be included as a field of a larger struct but
// must not be copied after first use. The Path field must be set before first
// use and must not be changed thereafter.
type Mutex struct {
// Path to the well-known lock file. Must be non-empty.
//
// Path must not change on a locked mutex.
Path string
// contains filtered or unexported fields
}
// MutexAt returns a new Mutex with Path set to the given non-empty path.
func MutexAt(path string) *Mutex
// Lock attempts to lock the Mutex.
//
// If successful, Lock returns a non-nil unlock function: it is provided as a
// return-value instead of a separate method to remind the caller to check the
// accompanying error. (See https://golang.org/issue/20803.)
func (mu *Mutex) Lock() (unlock func(), err error)
// String returns a string containing the path of the mutex.
func (mu *Mutex) String() string
```
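To illustrate how the API above composes, here is a sketch of cross-process
locking plus a locked read-modify-write. The import path is hypothetical,
since where the package should live (`x/exp`, `x/sync`, ...) is exactly what
this proposal is deciding:

```go
package main

import (
	"fmt"

	"golang.org/x/sync/lockedfile" // hypothetical final import path
)

func main() {
	// Serialize access across processes with a well-known lock file.
	mu := lockedfile.MutexAt("/var/lock/myapp.lock")
	unlock, err := mu.Lock()
	if err != nil {
		// Unlike sync.Mutex, locking can fail (e.g. permission errors).
		panic(err)
	}
	defer unlock()

	// Transform holds the file's write lock across both the read and
	// the write, making the increment atomic with respect to other
	// processes using the same API.
	err = lockedfile.Transform("counter.txt", func(old []byte) ([]byte, error) {
		var n int
		fmt.Sscanf(string(old), "%d", &n)
		return []byte(fmt.Sprintf("%d\n", n+1)), nil
	})
	if err != nil {
		panic(err)
	}
}
```

This sketch assumes the exported names shown above; it cannot compile today
because the package is still internal to `cmd/go`.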
## Rationale
* `lockedfile.File` implements a subset of `os.File`, adding file-locking
  protection.
* The `lockedfile.Mutex` does not implement `sync.Locker`: unlike a
`sync.Mutex`, a `lockedfile.Mutex` can fail to lock (e.g. if there is a
permission error in the filesystem).
* `lockedfile` adds `Edit` and `Transform` functions, which have no `os`
  equivalent. `Edit` exists to make it easier to implement locked
  read-modify-write operations; `Transform` simplifies the act of reading and
  then writing to a locked file.
* Making this package public will increase its use. A small surge of issues
  might come at the beginning, to everyone's benefit (unless it is already
  bug free!).
* The https://godoc.org/github.com/rogpeppe/go-internal package already
  exports a number of internal packages from the Go repo. But even if
  go-internal became wildly popular, to have a bug fixed or a feature
  introduced a user would still need to open a PR on the Go repo, and then the
  author of go-internal would need to update the package.
## Compatibility
There are no backward-compatibility issues, since this is purely a code
addition, but ideally we don't want to maintain two copies of this package
going forward, and we probably don't want to vendor `x/exp` into the `cmd`
module. Perhaps that implies that this should go in the `x/sys` or `x/sync`
repo instead?
## Implementation
Adrien Delorme plans to copy the exported types listed in the proposal section
from `cmd/go/internal/lockedfile` to `x/sync`.
Adrien Delorme then plans to change the references to the `lockedfile` package
in `cmd`.
# Go command support for embedded static assets (files) — Draft Design
Russ Cox\
Brad Fitzpatrick\
July 2020
This is a **Draft Design**, not a formal Go proposal,
because it describes a potential
[large change](https://research.swtch.com/proposals-large#checklist)
that addresses the same need as many third-party packages
and could affect their implementations (hopefully by simplifying them!).
The goal of circulating this draft design is to collect feedback
to shape an intended eventual proposal.
This design builds upon the [file system interfaces draft design](https://golang.org/s/draft-iofs-design).
We are using this change to experiment with new ways to
[scale discussions](https://research.swtch.com/proposals-discuss)
about large changes.
For this change, we will use
[a Go Reddit thread](https://golang.org/s/draft-embed-reddit)
to manage Q&A, since Reddit’s threading support
can easily match questions with answers
and keep separate lines of discussion separate.
There is a [video presentation](https://golang.org/s/draft-embed-video) of this draft design.
The [prototype code](https://golang.org/s/draft-embed-code) is available for trying out.
## Abstract
There are many tools to embed static assets (files) into Go binaries.
All depend on a manual generation step followed by checking in the
generated files to the source code repository.
This draft design eliminates both of these steps by adding support
for embedded static assets to the `go` command itself.
## Background
There are many tools to embed static assets (files) into Go binaries.
One of the earliest and most popular was
[github.com/jteeuwen/go-bindata](https://pkg.go.dev/github.com/jteeuwen/go-bindata)
and its forks, but there are many more, including (but not limited to!):
- [github.com/alecthomas/gobundle](https://pkg.go.dev/github.com/alecthomas/gobundle)
- [github.com/GeertJohan/go.rice](https://pkg.go.dev/github.com/GeertJohan/go.rice)
- [github.com/go-playground/statics](https://pkg.go.dev/github.com/go-playground/statics)
- [github.com/gobuffalo/packr](https://pkg.go.dev/github.com/gobuffalo/packr)
- [github.com/knadh/stuffbin](https://pkg.go.dev/github.com/knadh/stuffbin)
- [github.com/mjibson/esc](https://pkg.go.dev/github.com/mjibson/esc)
- [github.com/omeid/go-resources](https://pkg.go.dev/github.com/omeid/go-resources)
- [github.com/phogolabs/parcello](https://pkg.go.dev/github.com/phogolabs/parcello)
- [github.com/pyros2097/go-embed](https://pkg.go.dev/github.com/pyros2097/go-embed)
- [github.com/rakyll/statik](https://pkg.go.dev/github.com/rakyll/statik)
- [github.com/shurcooL/vfsgen](https://pkg.go.dev/github.com/shurcooL/vfsgen)
- [github.com/UnnoTed/fileb0x](https://pkg.go.dev/github.com/UnnoTed/fileb0x)
- [github.com/wlbr/templify](https://pkg.go.dev/github.com/wlbr/templify)
- [perkeep.org/pkg/fileembed](https://pkg.go.dev/perkeep.org/pkg/fileembed)
Clearly there is a widespread need for this functionality.
The `go` command is the way Go developers build Go programs.
Adding direct support to the `go` command for the basic functionality
of embedding will eliminate the need for some of these tools and
at least simplify the implementation of others.
### Goals
It is an explicit goal to eliminate the need to generate new
Go source files for the assets and commit those source files to version control.
Another explicit goal is to avoid a language change.
To us, embedding static assets seems like a tooling issue,
not a language issue.
Avoiding a language change also means we avoid the need
to update the many tools that process Go code, among them
goimports, gopls, and staticcheck.
It is important to note that as a matter of both design and policy,
the `go` command _never runs user-specified code during a build_.
This improves the reproducibility, scalability, and security of builds.
This is also the reason that `go generate` is a separate manual step
rather than an automatic one.
Any new `go` command support for embedded static assets
is constrained by that design and policy choice.
Another goal is that the solution apply equally well
to the main package and to its dependencies, recursively.
For example, it would not work to require the developer to list
all embeddings on the `go build` command line,
because that would require knowing the embeddings needed by all
of the dependencies of the program being built.
Another goal is to avoid designing novel APIs for accessing files.
The API for accessing embedded files should be as close as possible
to `*os.File`, the existing standard library API for accessing native operating-system files.
## Design
This design adds direct support for embedded static assets into the go command itself,
building on the file system draft design.
That support consists of:
- A new `//go:embed` comment directive naming the files to embed.
- A new `embed` package, which defines the type `embed.Files`,
the public API for a set of embedded files.
The `embed.Files` implements `fs.FS` from the
[file system interfaces draft design](https://golang.org/s/draft-iofs-design),
making it directly usable with packages
like `net/http` and `html/template`.
- Go command changes to process the directives.
- Changes to `go/build` and `golang.org/x/tools/go/packages` to expose
information about embedded files.
### //go:embed directives
A new package `embed`, described in detail below,
provides the type `embed.Files`.
One or more `//go:embed` directives
above a variable declaration of that type specify which files to embed,
in the form of a glob pattern.
For example:
```go
package server

// content holds our static web server content.
//go:embed image/* template/*
//go:embed html/index.html
var content embed.Files
```
The `go` command will recognize the directives and
arrange for the declared `embed.Files` variable (in this case, `content`)
to be populated with the matching files from the file system.
The `//go:embed` directive accepts multiple space-separated
glob patterns for brevity, but it can also be repeated,
to avoid very long lines when there are many patterns.
The glob patterns are in the syntax of `path.Match`;
they must be unrooted, and they are interpreted
relative to the package directory containing the source file.
The path separator is a forward slash, even on Windows systems.
To allow for naming files with spaces in their names,
patterns can be written as Go double-quoted or back-quoted string literals.
If a pattern names a directory, all files in the subtree rooted
at that directory are embedded (recursively),
so the above example is equivalent to:
```go
package server

// content is our static web server content.
//go:embed image template html/index.html
var content embed.Files
```
An `embed.Files` variable can be exported or unexported,
depending on whether the package wants to make the file set
available to other packages.
Similarly, an `embed.Files` variable can be a global or a local variable,
depending on what is more convenient in context.
- When evaluating patterns, matches for empty directories are ignored
(because empty directories are never packaged into a module).
- It is an error for a pattern not to match any file or non-empty directory.
- It is _not_ an error to repeat a pattern or for multiple patterns to match
a particular file; such a file will only be embedded once.
- It is an error for a pattern to contain a `..` path element.
- It is an error for a pattern to contain a `.` path element
(to match everything in the current directory, use `*`).
- It is an error for a pattern to match files outside the current module
or that cannot be packaged into a module, like `.git/*` or symbolic links
(or, as noted above, empty directories).
- It is an error for a `//go:embed` directive to appear except
before a declaration of an `embed.Files`.
(More specifically, each `//go:embed` directive must be followed by
a `var` declaration of a variable of type `embed.Files`, with only blank lines
and other `//`-comment-only lines between the `//go:embed` and the declaration.)
- It is an error to use `//go:embed` in a source file that does not import
`"embed"`
(the only way to violate this rule involves type alias trickery).
- It is an error to use `//go:embed` in a module declaring a Go version
before Go 1._N_, where _N_ is the Go version that adds this support.
- It is _not_ an error to use `//go:embed` with local variables declared in functions.
- It is _not_ an error to use `//go:embed` in tests.
- It is _not_ an error to declare an `embed.Files` without a `//go:embed` directive.
That variable simply contains no embedded files.
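Putting a few of these rules together, a sketch of valid placements under this
draft design (the `embed.Files` type is the draft API, not shipped Go):

```go
package assets

import "embed"

// Quoted patterns allow file names containing spaces.
//go:embed "release notes.txt"
var notes embed.Files

// A local variable inside a function is also allowed.
func styles() embed.Files {
	//go:embed style/*.css
	var css embed.Files
	return css
}

// No directive at all is fine too: this variable contains no files.
var empty embed.Files
```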
### The embed package
The new package `embed` defines the `Files` type:
```go
// A Files provides access to a set of files embedded in a package at build time.
type Files struct { … }
```
The `Files` type provides an `Open` method that opens an embedded file, as an `fs.File`:
```go
func (f Files) Open(name string) (fs.File, error)
```
By providing this method, the `Files` type implements `fs.FS` and can be used with utility functions
such as `fs.ReadFile`, `fs.ReadDir`, `fs.Glob`, and `fs.Walk`.
As a convenience for the most common operation on embedded files, the `Files` type also provides a `ReadFile` method:
```go
func (f Files) ReadFile(name string) ([]byte, error)
```
Because `Files` implements `fs.FS`, a set of embedded files can also
be passed to `template.ParseFS`, to parse embedded templates,
and to `http.HandlerFS`, to serve a set of embedded files over HTTP.
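For example, under this draft API a server package might parse embedded
templates and serve embedded static files directly. This is a sketch:
`embed.Files` and `http.HandlerFS` are the draft-design names, not shipped
API:

```go
package server

import (
	"embed"
	"html/template"
	"net/http"
)

//go:embed static template
var content embed.Files

// Because content implements fs.FS, it plugs into template parsing...
var templates = template.Must(template.ParseFS(content, "template/*.tmpl"))

func register() {
	// ...and into HTTP file serving, with no copies of the data on disk.
	http.Handle("/static/", http.HandlerFS(content))
}
```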
### Go command changes
The `go` command will change to process `//go:embed` directives
and pass appropriate information to the compiler and linker
to carry out the embedding.
The `go` command will also add six new fields to the `Package` struct
exposed by `go list`:
```go
EmbedPatterns      []string
EmbedFiles         []string
TestEmbedPatterns  []string
TestEmbedFiles     []string
XTestEmbedPatterns []string
XTestEmbedFiles    []string
```
The `EmbedPatterns` field lists all the patterns found on `//go:embed` lines
in the package’s non-test source files; `TestEmbedPatterns` and `XTestEmbedPatterns`
list the patterns in the package’s test source files (internal and external tests, respectively).
The `EmbedFiles` field lists all the files, relative to the package directory,
matched by the `EmbedPatterns`; it does not specify which files match which pattern,
although that could be reconstructed using `path.Match`.
Similarly, `TestEmbedFiles` and `XTestEmbedFiles` list the files matched by `TestEmbedPatterns` and `XTestEmbedPatterns`.
These file lists contain only files; if a pattern matches a directory, the file list
includes all the files found in that directory subtree.
### go/build and golang.org/x/tools/go/packages
In the `go/build` package, the `Package` struct adds only
`EmbedPatterns`, `TestEmbedPatterns`, and `XTestEmbedPatterns`,
not `EmbedFiles`, `TestEmbedFiles`, or `XTestEmbedFiles`,
because the `go/build` package does not take on the job of
matching patterns against a file system.
In the `golang.org/x/tools/go/packages` package,
the `Package` struct adds one new field:
`EmbedFiles` lists the embedded files.
(If embedded files were added to `OtherFiles`,
it would not be possible to tell whether a file with a valid
source extension in that list—for example, `x.c`—was
being built or embedded or both.)
## Rationale
As noted above, the Go ecosystem has many tools for embedding static assets,
too many for a direct comparison to each one.
Instead, this section lays out the affirmative rationale in favor of each of the
parts of the design.
Each subsection also addresses the points raised in the helpful preliminary discussion on
golang.org/issue/35950.
(The Appendix at the end of this document makes direct comparisons with a few existing tools
and examines how they might be simplified.)
It is worth repeating the goals and constraints mentioned in the background section:
- No generated Go source files.
- No language change, so no changes to tools processing Go code.
- The `go` command does not run user code during `go build`.
- The solution must apply as well to dependency packages as it does to the main package.
- The APIs for accessing embedded files should be close to those for operating-system files.
### Approach
The core of the design is the new `embed.Files` type
annotated at its use with the new `//go:embed` directive:
```go
//go:embed *.jpg
var jpgs embed.Files
```
This is different from the two approaches mentioned
at the start of the preliminary discussion on golang.org/issue/35950.
In some ways it is a combination of the best parts of each.
The first approach mentioned was a directive along the lines of
```go
//go:genembed Logo logo.jpg
```
that would be replaced by a generated `func Logo() []byte` function,
or some similar accessor.
A significant drawback of this approach is that it changes the way
programs are type-checked: you can’t type-check a call to `Logo`
unless you know what that directive turns into.
There is also no obvious place to write the documentation
for the new `Logo` function.
In effect, this new directive ends up being a full language change:
all tools processing Go code have to be updated to understand it.
The second approach mentioned was to have a new importable `embed`
package with standard Go function definitions,
but the functions are in effect executed at compile time, as in:
```go
var Static = embed.Dir("static")
var Logo = embed.File("images/logo.jpg")
var Words = embed.CompressedReader("dict/words")
```
This approach fixes the type-checking problem—it is not a full
language change—but it still has significant implementation complexity.
The `go` command would need to parse the entire Go source file
to understand which files need to be made available for embedding.
Today it only parses up to the import block, never full Go expressions.
It would also be unclear to users what constraints are placed on the
arguments to these special calls: they look like ordinary Go calls
but they can only take string literals, not strings computed by Go code,
and probably not even named constants (or else the `go` command
would need a full Go expression evaluator).
Much of the preliminary discussion focused on
deciding between these two approaches.
This design combines the two and avoids the drawbacks of each.
The `//go:embed` comment directive follows the established convention
for Go build system and compiler directives.
The directive is easy for the `go` command to find,
and it is clear immediately that the directive can’t refer to
a string computed by a function call, nor to a named constant.
The `embed.Files` type is plain Go code,
defined in a plain Go package `embed`.
All tools that type-check Go code or run other analysis on it
can understand the code without any special handling of the `//go:embed` directive.
The explicit variable declaration provides a clear place
to write documentation:
```go
// jpgs holds the static images used on the home page.
//go:embed *.jpg
var jpgs embed.Files
```
(As of Go 1.15, the `//go:embed` line is not considered part of the doc comment.)
The explicit variable declaration also provides a clear way to
control whether the `embed.Files` is exported.
A data-only package might do nothing but export embedded files, like:
```go
package web

// Styles holds the CSS files shared among all our websites.
//go:embed style/*.css
var Styles embed.Files
```
#### Modules versus packages
In the preliminary discussion, a few people suggested specifying embedded files
using a new directive in `go.mod`.
The design of Go modules, however, is that `go.mod` serves only to describe
information about the module’s version requirements,
not other details of a particular package.
It is not a collection of general-purpose metadata.
For example, compiler flags or build tags would be inappropriate
in `go.mod`.
For the same reason, information about one package’s embedded files
is also inappropriate in `go.mod`:
each package’s individual meaning should be defined by its Go sources.
The `go.mod` is only for deciding which versions of other packages
are used to resolve imports.
Placing the embedding information in the package has benefits
that using `go.mod` would not, including the explicit declaration of
the file set, control over exportedness, and so on.
#### Glob patterns
It is clear that there needs to be some way to give a pattern of files to include,
such as `*.jpg`.
This design adopts glob patterns as the single way to name files for inclusion.
Glob patterns are common to developers from command shells,
and they are already well-defined in Go, in the APIs for `path.Match`, `filepath.Match`,
and `filepath.Glob`.
Nearly all file names are valid glob patterns matching only themselves;
using globs avoids the need for separate `//go:embedfile` and `//go:embedglob`
directives.
(This would not be the case if we used, say, Go regular expressions
as provided by the `regexp` package.)
#### Directories versus \*\* glob patterns
In some systems, the glob pattern `**` is like `*` but can match multiple path elements.
For example `images/**.jpg` matches all `.jpg` files in the directory tree rooted at `images/`.
This syntax is not available in Go’s `path.Match` or in `filepath.Glob`,
and it seems better to use the available syntax than to define a new one.
The rule that matching a directory includes all files in that directory tree
should address most of the need for `**` patterns.
For example, `//go:embed images` instead of `//go:embed images/**.jpg`.
It’s not exactly the same, but hopefully good enough.
If at some point in the future it becomes clear that `**` glob patterns
are needed, the right way to support them would be to add them to
`path.Match` and `filepath.Glob`; then the `//go:embed` directives
would get them for free.
#### Dot-dot, module boundaries, and file name restrictions
In order to build files embedded in a dependency,
the raw files themselves must be included in module zip files.
This implies that any embedded file must be in the module’s own file tree.
It cannot be in a parent directory above the module root (like `../../../etc/passwd`),
it cannot be in a subdirectory that contains a different module,
and it cannot be in a directory that would be left out of the module (like `.git`).
Another implication is that it is not possible to embed two different
files that differ only in the case of their file names,
because those files would not be possible to extract on a
case-insensitive system like Windows or macOS.
So you can’t embed two files with different casings, like this:
```go
//go:embed README readme
```
But `//go:embed dir/README other/readme` is fine.
Because `embed.Files` implements `fs.FS`, it cannot provide access
to files with names beginning with `..`, so files in parent directories
are also disallowed entirely, even when the parent directory named by `..`
does happen to be in the same module.
#### Codecs and other processing
The preliminary discussion raised a large number of possible transformations
that might be applied to files before embedding,
including:
data compression,
JavaScript minification,
TypeScript compilation,
image resizing,
generation of sprite maps,
UTF-8 normalization,
and
CR/LF normalization.
It is not feasible for the `go` command to anticipate or include
all the possible transformations that might be desirable.
The `go` command is also not a general build system;
in particular, remember the design constraint that it never
runs user programs during a build.
These kinds of transformations are best left to an external
build system, such as Make or Bazel,
which can write out the exact bytes that the `go` command
should embed.
A more limited version of this suggestion was to gzip-compress
the embedded data and then make that compressed form available
for direct use in HTTP servers as gzipped response content.
Doing this would force the use of (or at least support for)
gzip and compressed content, making it harder to adjust the
implementation in the future as we learn more about how well it works.
Overall this seems like overfitting to a specific use case.
The simplest approach is for Go’s embedding feature to store
plain files, and let build systems or third-party packages take care of preprocessing before the build
or postprocessing at runtime.
That is, the design focuses on providing the core functionality of
embedding raw bytes into the binary for use at run-time,
leaving other tools and packages to build on a solid foundation.
### Compression to reduce binary size
A popular question in the preliminary discussion was whether
the embedded data should be stored in compressed or
uncompressed form in the binary.
This design carefully avoids assuming an answer to that question.
Instead, whether to compress can be left as an implementation detail.
Compression carries the obvious benefit of smaller binaries.
However, it also carries some less obvious costs.
Most compression formats (in particular gzip and zip)
do not support random access to the uncompressed data,
but an `http.File` needs random access (`ReadAt`, `Seek`)
to implement range requests.
Other uses may need random access as well.
For this reason, many of the popular embedding tools
start by decompressing the embedded data at runtime.
This imposes a startup CPU cost and a memory cost.
In contrast, storing the embedded data uncompressed
in the binary supports random access with no startup CPU cost.
It also reduces memory cost:
the file contents are never stored in the garbage-collected heap,
and the operating system efficiently pages in necessary data
from the executable as that data is accessed,
instead of needing to load it all at once.
Most systems have more disk than RAM.
On those systems, it makes very little sense to
make binaries smaller at the cost of using more memory (and more CPU) at run time.
On the other hand, projects like [TinyGo](https://tinygo.org/) and [U-root](https://u-root.org/)
target systems with more RAM than disk or flash.
For those projects, compressing assets and using
incremental decompression at runtime could provide
significant savings.
Again, this design allows compression to be left as an
implementation detail.
The detail is not decided by each package author
but instead could be decided when building the final binary.
Future work might be to add `-embed=compress`
as a `go` build option for use in limited environments.
### Go command changes
Other than support for `//go:embed` itself,
the only user-visible `go` command change
is new fields exposed in `go list` output.
It is important for tools that process Go packages to be able
to understand what files are needed for a build.
The `go list` command is the underlying mechanism
used now, even by `golang.org/x/tools/go/packages`.
Exposing the embedded files as a new field in `Package` struct
used by `go list`
makes them available both for direct use and
for use by higher level APIs.
#### Command-line configuration
In the preliminary discussion, a few people suggested that
the list of embedded files could be specified on the `go build`
command line.
This could potentially work for files embedded in the main package,
perhaps with an appropriate Makefile.
But it would fail badly for dependencies:
if a dependency wanted to add a new embedded file,
all programs built with that dependency would need
to adjust their build command lines.
#### Potential confusion with go:generate
In the preliminary discussion, a few people pointed out that
developers might be confused by the inconsistency that `//go:embed` directives
are processed during builds but `//go:generate` directives are not.
There are other special comment directives as well: `//go:noinline`, `//go:noescape`, `// +build`, `//line`.
All of these are processed during builds.
The exception is `//go:generate`,
because of the design constraint that the `go` command
not run user code during builds.
`//go:embed` is not the special case, nor does it make
`//go:generate` any more of a special case.
For more about `go generate`,
see the [original proposal](https://docs.google.com/document/d/1V03LUfjSADDooDMhe-_K59EgpTEm3V8uvQRuNMAEnjg/edit)
and [discussion](https://groups.google.com/g/golang-dev/c/ZTD1qtpruA8).
### The embed package
#### Import path
The new `embed` package provides access to embedded files.
Previous additions to the standard library
have been made in `golang.org/x` first, to make them
available to earlier versions of Go.
However, it would not make sense to use `golang.org/x/embed`
instead of `embed`:
the older versions of Go could import `golang.org/x/embed`
but still not be able to embed files without the newer `go` command support.
It is clearer for a program using `embed` to fail to compile
than it would be to compile but not embed any files.
#### File API
Implementing `fs.FS` enables hooking into `net/http`, `text/template`, and `html/template`,
without needing to make those packages aware of `embed`.
Code that wants to change between using operating system files and
embedded files can be written in terms of `fs.FS` and `fs.File`
and then use `os.DirFS` as an `fs.FS` or use a `*os.File` directly as an `fs.File`.
#### Direct access to embedded data
An obvious extension would be to add to `embed.Files`
a `ReadFileString` method that returns the file content as a string.
If the embedded data were stored in the binary uncompressed,
`ReadFileString` would be very efficient: it could return a string
pointing into the in-binary copy of the data.
Callers expecting zero allocation in `ReadFileString`
might well preclude a future `-embed=compress` mode that
trades binary size for access time, which could not provide
the same kind of efficient direct access to raw uncompressed data.
An explicit `ReadFileString` method would also make it more
difficult to convert code using `embed.Files` to use other `fs.FS`
implementations, including operating system files.
For now, it seems best to omit a `ReadFileString` method,
to avoid exposing the underlying representation
and also to avoid diverging from `fs.FS`.
Another extension would be to add to the returned `fs.File` a `WriteTo` method.
All the arguments against `ReadFileString` apply equally well to `WriteTo`.
An additional reason to avoid `WriteTo` is that it would expose the
uncompressed data in a mutable form, `[]byte` instead of `string`.
The price of this flexibility—both the flexibility to
move easily between `embed.Files` and other file systems
and also the flexibility to add `-embed=compress` later
(perhaps that would be useful for TinyGo)—is that access to data requires making a copy.
This is at least no less efficient than reading from other file sources.
#### Writing embedded files to disk
In the preliminary discussion, one person asked about making it
easy to write embedded files back to disk at runtime, to make
them available for use with the HTTP server, template parsing, and so on.
While this is certainly possible to do,
we probably should avoid that as the suggested way to use
embedded files:
many programs run with limited or no access to writable disk.
Instead, this design builds on the [file system draft design](https://golang.org/s/draft-iofs-design)
to make the embedded files available to those APIs.
## Compatibility
This is all new API.
There are no conflicts with the [compatibility guidelines](https://golang.org/doc/go1compat).
It is worth noting that, as with all new API, this functionality cannot be adopted
by a Go project until all developers building the project have updated to the
version of Go that supports the API.
This may be a particularly important concern for authors of libraries.
If this functionality ships in Go 1.15, library authors may wish to wait
to adopt it until they are confident that all their users have updated to
Go 1.15.
## Implementation
The implementation details are not user-visible
and do not matter nearly as much as the rest of the design.
A [prototype implementation](https://golang.org/s/draft-iofs-code) is available.
## Appendix: Comparison with other tools
A goal of this design is to eliminate much of the effort involved
in embedding static assets in Go binaries.
It should be able to replace the common uses of most of
the available embedding tools.
Replacing all possible uses is a non-goal.
Replacing all possible embedding tools is also a non-goal.
This section examines a few popular embedding tools
and compares and contrasts them with this design.
### go-bindata
One of the earliest and simplest generators for static assets is
[`github.com/jteeuwen/go-bindata`](https://pkg.go.dev/github.com/jteeuwen/go-bindata?tab=doc).
It is no longer maintained, so now there are many forks and derivatives,
but this section examines the original.
Given an input file `hello.txt` containing the single line `hello, world`,
`go-bindata hello.txt` produces 235 lines of Go code.
The generated code exposes this exported API (in the package where it is run):
```
func Asset(name string) ([]byte, error)
Asset loads and returns the asset for the given name. It returns an error if
the asset could not be found or could not be loaded.
func AssetDir(name string) ([]string, error)
AssetDir returns the file names below a certain directory embedded in the
file by go-bindata. For example if you run go-bindata on data/... and data
contains the following hierarchy:
    data/
        foo.txt
        img/
            a.png
            b.png
then AssetDir("data") would return []string{"foo.txt", "img"}
AssetDir("data/img") would return []string{"a.png", "b.png"}
AssetDir("foo.txt") and AssetDir("notexist") would return an error
AssetDir("") will return []string{"data"}.
func AssetInfo(name string) (os.FileInfo, error)
AssetInfo loads and returns the asset info for the given name. It returns an
error if the asset could not be found or could not be loaded.
func AssetNames() []string
AssetNames returns the names of the assets.
func MustAsset(name string) []byte
MustAsset is like Asset but panics when Asset would return an error. It
simplifies safe initialization of global variables.
func RestoreAsset(dir, name string) error
RestoreAsset restores an asset under the given directory
func RestoreAssets(dir, name string) error
RestoreAssets restores an asset under the given directory recursively
```
This code and exported API are duplicated in every package using `go-bindata`-generated output.
One benefit of this design is that the access code can be in a single package
shared by all clients.
The embedded data is gzipped. It must be decompressed when accessed.
The `embed` API provides all this functionality
except for “restoring” assets back to the local file system.
See the “Writing embedded assets to disk” section above
for more discussion about why it makes sense to leave that out.
### statik
Another venerable asset generator is
[github.com/rakyll/statik](https://pkg.go.dev/github.com/rakyll/statik).
Given an input file `public/hello.txt` containing the single line `hello, world`,
running `statik` generates a subdirectory `statik` containing an
import-only package with a `func init` containing a single call,
to register the data for the asset named `"hello.txt"` with
the access package [github.com/rakyll/statik/fs](https://pkg.go.dev/github.com/rakyll/statik).
The use of a single shared registration introduces the possibility
of naming conflicts: what if multiple packages want to embed
different static `hello.txt` assets?
Users can specify a namespace when running `statik`,
but the default is that all assets end up in the same namespace.
This design avoids collisions and explicit namespaces by keeping
each `embed.Files` separate: there is no global state
or registration.
The registered data in any given invocation is a string containing
the bytes of a single zip file holding all the static assets.
Other than registration calls, the `statik/fs` package includes this API:
```
func New() (http.FileSystem, error)
New creates a new file system with the default registered zip contents data.
It unzips all files and stores them in an in-memory map.
func NewWithNamespace(assetNamespace string) (http.FileSystem, error)
NewWithNamespace creates a new file system with the registered zip contents
data. It unzips all files and stores them in an in-memory map.
func ReadFile(hfs http.FileSystem, name string) ([]byte, error)
ReadFile reads the contents of the file of hfs specified by name. Just as
ioutil.ReadFile does.
func Walk(hfs http.FileSystem, root string, walkFn filepath.WalkFunc) error
Walk walks the file tree rooted at root, calling walkFn for each file or
directory in the tree, including root. All errors that arise visiting files
and directories are filtered by walkFn.
As with filepath.Walk, if the walkFn returns filepath.SkipDir, then the
directory is skipped.
```
The `embed` API provides all this functionality
(converting to `http.FileSystem`, reading a file, and walking the files).
Note that accessing any single file requires first decompressing
all the embedded files. The decision in this design to avoid
compression is discussed more above, in the
“Compression to reduce binary size” section.
### go.rice
Another venerable asset generator is
[github.com/GeertJohan/go.rice](https://github.com/GeertJohan/go.rice).
It presents a concept called a `rice.Box`
which is like an `embed.Files` filled from a specific file system directory.
Suppose `box/hello.txt` contains `hello world` and `hello.go` is:
    package main

    import rice "github.com/GeertJohan/go.rice"

    func main() {
        rice.FindBox("box")
    }
The command `rice embed-go` generates a 44-line file `rice-box.go` that
calls `embedded.RegisterEmbeddedBox` to register a box named `box` containing
the single file `hello.txt`.
The data is uncompressed.
The registration means that `go.rice` has the same possible
collisions as `statik`.
The `rice embed-go` command parses the Go source file `hello.go`
to find calls to `rice.FindBox` and then uses the argument as both
the name of the box and the local directory containing its contents.
This approach is similar to the “second approach” identified in the preliminary
discussion, and it demonstrates all the drawbacks suggested above.
In particular, only the first of these variants works with the `rice` command:
    rice.FindBox("box")

    rice.FindBox("b" + "o" + "x")

    const box = "box"
    rice.FindBox(box)

    func box() string { return "box" }
    rice.FindBox(box())
As the Go language is defined, these should all do the same thing.
The limitation to the first form is fine in an opt-in tool,
but it would be problematic to impose in the standard toolchain,
because it would break the orthogonality of language concepts.
The API provided by the `rice` package is:
```
type Box struct {
// Has unexported fields.
}
Box abstracts a directory for resources/files. It can either load files from
disk, or from embedded code (when `rice --embed` was ran).
func FindBox(name string) (*Box, error)
FindBox returns a Box instance for given name. When the given name is a
relative path, it’s base path will be the calling pkg/cmd’s source root.
When the given name is absolute, it’s absolute. derp. Make sure the path
doesn’t contain any sensitive information as it might be placed into
generated go source (embedded).
func MustFindBox(name string) *Box
MustFindBox returns a Box instance for given name, like FindBox does. It
does not return an error, instead it panics when an error occurs.
func (b *Box) Bytes(name string) ([]byte, error)
Bytes returns the content of the file with given name as []byte.
func (b *Box) HTTPBox() *HTTPBox
HTTPBox creates a new HTTPBox from an existing Box
func (b *Box) IsAppended() bool
IsAppended indicates whether this box was appended to the application
func (b *Box) IsEmbedded() bool
IsEmbedded indicates whether this box was embedded into the application
func (b *Box) MustBytes(name string) []byte
MustBytes returns the content of the file with given name as []byte. panic’s
on error.
func (b *Box) MustString(name string) string
MustString returns the content of the file with given name as string.
panic’s on error.
func (b *Box) Name() string
Name returns the name of the box
func (b *Box) Open(name string) (*File, error)
Open opens a File from the box If there is an error, it will be of type
*os.PathError.
func (b *Box) String(name string) (string, error)
String returns the content of the file with given name as string.
func (b *Box) Time() time.Time
Time returns how actual the box is. When the box is embedded, it’s value is
saved in the embedding code. When the box is live, this methods returns
time.Now()
func (b *Box) Walk(path string, walkFn filepath.WalkFunc) error
Walk is like filepath.Walk() Visit http://golang.org/pkg/path/filepath/#Walk
for more information
```
```
type File struct {
// Has unexported fields.
}
File implements the io.Reader, io.Seeker, io.Closer and http.File interfaces
func (f *File) Close() error
Close is like (*os.File).Close() Visit http://golang.org/pkg/os/#File.Close
for more information
func (f *File) Read(bts []byte) (int, error)
Read is like (*os.File).Read() Visit http://golang.org/pkg/os/#File.Read for
more information
func (f *File) Readdir(count int) ([]os.FileInfo, error)
Readdir is like (*os.File).Readdir() Visit
http://golang.org/pkg/os/#File.Readdir for more information
func (f *File) Readdirnames(count int) ([]string, error)
Readdirnames is like (*os.File).Readdirnames() Visit
http://golang.org/pkg/os/#File.Readdirnames for more information
func (f *File) Seek(offset int64, whence int) (int64, error)
Seek is like (*os.File).Seek() Visit http://golang.org/pkg/os/#File.Seek for
more information
func (f *File) Stat() (os.FileInfo, error)
Stat is like (*os.File).Stat() Visit http://golang.org/pkg/os/#File.Stat for
more information
```
```
type HTTPBox struct {
*Box
}
HTTPBox implements http.FileSystem which allows the use of Box with a
http.FileServer.
e.g.: http.Handle("/", http.FileServer(rice.MustFindBox("http-files").HTTPBox()))
func (hb *HTTPBox) Open(name string) (http.File, error)
Open returns a File using the http.File interface
```
As far as public API, `go.rice` is very similar to this design.
The `Box` itself is like `embed.Files`,
and the `File` is similar to `fs.File`.
This design avoids `HTTPBox` by building on HTTP support for `fs.FS`.
### Bazel
The Bazel build tool includes support for building Go,
and its [`go_embed_data`](https://github.com/bazelbuild/rules_go/blob/master/go/extras.rst#go-embed-data) rule supports embedding a file as data in a Go program.
It is used like:
    go_embed_data(
        name = "rule_name",
        package = "main",
        var = "hello",
        src = "hello.txt",
    )
or
    go_embed_data(
        name = "rule_name",
        package = "main",
        var = "files",
        srcs = [
            "hello.txt",
            "gopher.txt",
        ],
    )
The first form generates a file like:
    package main

    var hello = []byte("hello, world\n")
The second form generates a file like:
    package main

    var files = map[string][]byte{
        "hello.txt":  []byte("hello, world\n"),
        "gopher.txt": []byte("ʕ◔ϖ◔ʔ\n"),
    }
That’s all. There are configuration knobs to generate `string` instead of `[]byte`,
and to expand zip and tar files into their contents,
but there’s no richer API: just declared data.
Code using this form would likely keep using it: the `embed` API is more complex.
However, it will still be important to support this `//go:embed` design in Bazel.
The way to do that would be to provide a `go tool embed` that generates the
right code and then either adjust the Bazel `go_library` rule to invoke it
or have Gazelle (the tool that reads Go files and generates Bazel rules)
generate appropriate `genrules`.
The details would depend on the eventual Go implementation,
but any Go implementation of `//go:embed` needs to be able to be implemented
in Bazel/Gazelle in some way.
# Proposal: `go` command configuration file
Russ Cox
Last Updated: March 1, 2019.
[golang.org/design/30411-env](https://golang.org/design/30411-env)
Discussion at [golang.org/issue/30411](https://golang.org/issue/30411)
## Abstract
Setting environment variables for `go` command configuration
is too difficult and system-specific.
We propose to add `go env -w`, to set defaults more easily.
## Background
The `go` command is configured by environment variables:
see the output of `go env` for a partial list,
and `go help environment` for a longer one.
Although nearly all variables are optional,
it is not uncommon to need to set one or another.
The details of setting an environment variable's initial value
differ by operating system and even by distribution or
terminal program—for example, do you have to log out entirely,
or just restart the shell window?—which can make this environment-based configuration
quite difficult.
(When setting `$GOPATH` was required to get started with Go,
doing so was a major stumbling block for new users.)
It would help all users to have a consistent, simple way to set the
default value for these configuration variables.
## Proposal
We propose to store in the file [`os.UserConfigDir()`](https://golang.org/issue/29960)`+”/go/env”`
a list of key-value pairs giving the default settings for
configuration variables used by the `go` command.
Environment variables, when set, override the settings in this file.
The `go env <NAME> ...` command will continue to report the
effective values of the named configuration variables,
using the current environment, or else the `go.env` file,
or else a computed default.
A new option `go env -w <NAME>=<VALUE> ...` will set one or more
configuration variables in the `go.env` file.
The command will also print a warning if the
current environment has `$<NAME>` defined
and it is not set to `<VALUE>`.
For example, a user who needs to set a default `$GOPATH`
could now use:
    go env -w GOPATH=$HOME/mygopath
Another popular setting might be:
    go env -w GOBIN=$HOME/bin
The command `go env -u <NAME>...` will unset (delete, remove) entries
in the environment file.
Most users will interact with the `os.UserConfigDir()/go/env` file through the
`go env` and `go env -w` command line syntax,
so the exact stored file format
should not matter too much.
But for concreteness, the format is a sequence
of lines of the form `<NAME>=<VALUE>`,
in which everything after the `=` is a literal, uninterpreted value—no quoting,
no dollar expansion, no multiline values.
Blank lines, lines beginning with `#`, and
lines not containing `=`, are ignored.
If the file contains multiple lines beginning with `<NAME>=`, only the first has any effect.
Lines with empty values set the default value to the empty string,
possibly overriding a non-empty default.
Only the `go` command will consult the `os.UserConfigDir()/go/env` file.
The environment variables that control `go` libraries at runtime—for example,
`GODEBUG`, `GOMAXPROCS`, and `GOTRACEBACK`—will not be read from
`go.env` and will be rejected by `go env` command lines.
## Rationale
The `go` command is already configured by environment variables,
simple `<KEY>=<VALUE>` pairs.
An alternative would be to introduce a richer configuration file format,
such as JSON, TOML, XML, or YAML,
but then we would also need to define how these richer values
can be overridden in certain contexts.
Continuing to use plain `<KEY>=<VALUE>` pairs
aligns better with the existing environment-based approach
and avoids increasing the potential complexity of any particular configuration.
The use of `os.UserConfigDir()` (see [golang.org/issue/29960](https://golang.org/issue/29960))
seems to be the established correct default for most systems.
Traditionally we've stored things in `$GOPATH`, but we want to allow this file to contain
the default value for `$GOPATH`.
It may be necessary—albeit ironic—to add a `GOENV` environment variable
overriding the default location.
Obviously, it would not be possible to set the default for `GOENV` itself in the file.
## Compatibility
There are no compatibility issues.
It may be surprising in some use cases
that an empty environment still uses the `go.env` settings,
but those contexts could avoid creating a `go.env` in the first place,
or (if we add it) set `GOENV=off`.
## Implementation
Russ Cox plans to do the implementation in the `go` command in Go 1.13.
# Proposal: Secure the Public Go Module Ecosystem
Russ Cox\
Filippo Valsorda
Last updated: April 24, 2019.
[golang.org/design/25530-sumdb](https://golang.org/design/25530-sumdb)
Discussion at [golang.org/issue/25530](https://golang.org/issue/25530).
## Abstract
We propose to secure the public Go module ecosystem
by introducing a new server, the Go checksum database,
which serves what is in effect a `go.sum` file
listing all publicly-available Go modules.
The `go` command will use this service to fill in gaps
in its own local `go.sum` files,
such as during `go get -u`.
This ensures that unexpected code changes cannot
be introduced when first adding a dependency to a module
or when upgrading a dependency.
The original name for the Go checksum database was “the Go notary,”
but we have stopped using that name to avoid confusion
with the CNCF Notary project, itself written in Go,
not to mention the Apple Notary.
## Background
When you run `go` `get` `rsc.io/quote@v1.5.2`, `go` `get` first fetches
`https://rsc.io/quote?go-get=1` and looks for `<meta>` tags. It finds
    <meta name="go-import"
          content="rsc.io/quote git https://github.com/rsc/quote">
which tells it the code is in a Git repository on `github.com`.
Next it runs `git clone https://github.com/rsc/quote` to fetch
the Git repository and then extracts the file tree from the `v1.5.2` tag,
producing the actual module archive.
Historically, `go` `get` has always simply assumed that it was downloading
the right code.
An attacker able to intercept the connection to `rsc.io` or `github.com`
(or an attacker able to break into one of those systems, or a malicious module author)
would be able to cause `go` `get` to download different code tomorrow,
and `go` `get` would not notice.
There are
[many challenges in using software dependencies safely](https://research.swtch.com/deps),
and much more vetting should typically be done before taking on a
new dependency, but no amount of vetting is worth anything
if the code you download and vet today
differs from the code you or a collaborator downloads
tomorrow for the “same” module version.
We must be able to authenticate whether a particular
download is correct.
For our purposes, “correct” for a particular module version download
is defined as the same code everyone else downloads.
This definition ensures reproducibility of builds
and makes vetting of specific module versions meaningful,
without needing to attribute specific archives to
specific authors,
and without introducing new potential points of compromise
like per-author keys.
(Also, even the author of a module should not be able to change
the bits associated with a specific version from one day to the next.)
Being able to authenticate a particular module version download
effectively moves code hosting servers like `rsc.io` and `github.com`
out of the trusted computing base of the Go module ecosystem.
With module authentication, those servers could cause availability problems
by not serving a module version anymore,
but they cannot substitute different code.
The introduction of Go module proxies (see `go help goproxy`)
introduces yet another way for an attacker to intercept module downloads;
module authentication eliminates the need to trust those proxies as well,
moving them outside the
[trusted computing base](https://www.microsoft.com/en-us/research/publication/authentication-in-distributed-systems-theory-and-practice/).
See the Go blog post “[Go Modules in 2019](https://blog.golang.org/modules2019)”
for additional background.
### Module Authentication with `go.sum`
Go 1.11’s preview of Go modules introduced the `go.sum` file,
which is maintained automatically by the `go` command
in the root of a module tree
and contains cryptographic checksums for the content of each
dependency of that module.
If a module’s source file tree is obtained unmodified,
then the `go.sum` file allows authenticating all dependencies
needed for a build of that module.
It ensures that tomorrow’s builds will use the same exact
code for dependencies that today’s builds did.
Tomorrow’s downloads are authenticated by `go.sum`.
On the other hand, today’s downloads—the ones that add or update
dependencies in the first place—are not authenticated.
When a dependency is first added to a module,
or when a dependency is upgraded to a newer version,
there is no entry for it in `go.sum`,
and the `go` command today blindly trusts that it
downloads the correct code.
Then it records the hash of that code into `go.sum`
to ensure that code doesn’t change tomorrow.
But that doesn’t help the initial download.
The model is similar to SSH’s
“[trust on first use](https://en.wikipedia.org/wiki/Trust_on_first_use),”
and while that approach is an improvement over “trust every time,”
it’s still not ideal,
especially since developers typically download new module versions
far more often than they connect to new, unknown SSH servers.
We are concerned primarily with authenticating downloads
of publicly-available module versions.
We assume that the private servers hosting
private module source code are already within the
trusted computing base of the developers using that code.
In contrast, a developer who wants to use `rsc.io/quote`
should not be required to trust that `rsc.io` is properly secured.
This trust becomes particularly problematic when summed
over all dependencies.
What we need is an easily-accessed `go.sum` file listing every
publicly-available module version.
But we don’t want to blindly trust a downloaded `go.sum` file,
since that would become the next attractive target for an attacker.
### Transparent Logs
The [Certificate Transparency](https://www.certificate-transparency.org/) project
is based on a data structure called a _transparent log_.
The transparent log is hosted on a server and made accessible to clients for random access,
but clients are still able to verify that a particular log record really is in the log
and also that the server never removes any log record from the log.
Separately, third-party auditors can iterate over the log
checking that the entries themselves are accurate.
These two properties combined mean that
a client can use records from the log,
confident that those records will remain available in the log
for auditors to double-check and report invalid or suspicious entries.
Clients and auditors can also compare observations to ensure
that the server is showing the same data to everyone involved.
That is, the log server is not trusted to store the log properly,
nor is it trusted to put the right records into the log.
Instead, clients and auditors interact skeptically with the server,
able to verify for themselves in each interaction
that the server really is behaving correctly.
For details about the data structure, see Russ Cox’s blog post,
“[Transparent Logs for Skeptical Clients](https://research.swtch.com/tlog).”
For a high-level overview of Certificate Transparency
along with additional motivation and context,
see Ben Laurie's ACM Queue article,
“[Certificate Transparency: Public, verifiable, append-only logs](https://queue.acm.org/detail.cfm?id=2668154).”
The use of a transparent log for module hashes aligns with
a broader trend of using transparent logs to enable detection
of misbehavior by partially trusted systems,
what the Trillian team calls
“[General Transparency](https://github.com/google/trillian/#trillian-general-transparency).”
## Proposal
We propose to publish the `go.sum` lines for all publicly-available Go modules
in a transparent log,
served by a new server called the Go checksum database.
When a publicly-available module is not yet listed in
the main module’s `go.sum` file,
the `go` command will fetch the relevant `go.sum` lines
from the checksum database instead of trusting the initial download
to be correct.
### Checksum Database
The Go checksum database will run at `https://sum.golang.org/` and serve the following endpoints:
- `/latest` will serve a signed tree size and hash for the latest log.
- `/lookup/M@V` will serve the log record number for the entry about module M version V,
followed by the data for the record (that is, the `go.sum` lines for module M version V)
and a signed tree hash for a tree that contains the record.
If the module version is not yet recorded in the log, the checksum database will try to fetch it before replying.
Note that the data should never be used without first
authenticating it against the signed tree hash
and authenticating the signed tree hash against the client's
timeline of signed tree hashes.
- `/tile/H/L/K[.p/W]` will serve a [log tile](https://research.swtch.com/tlog#serving_tiles).
The optional `.p/W` suffix indicates a partial log tile with only `W` hashes.
Clients must fall back to fetching the full tile if a partial tile is not found.
The record data for the leaf hashes in `/tile/H/0/K[.p/W]` are served as `/tile/H/data/K[.p/W]`
(with a literal `data` path element).
Clients are expected to use `/lookup` and `/tile/H/L/...` during normal operations,
while auditors will want to use `/latest` and `/tile/H/data/...`.
A special `go` command may also fetch `/latest` to force incorporation
of that signed tree head into the local timeline.
### Proxying a Checksum Database
A module proxy can also proxy requests to the checksum database.
The general proxy URL form is `<proxyURL>/sumdb/<databaseURL>`.
If `GOPROXY=https://proxy.site` then the latest signed tree would be fetched using
`https://proxy.site/sumdb/sum.golang.org/latest`.
Including the full database URL allows a transition to a new database log,
such as `sum.golang.org/v2`.
Before accessing any checksum database URL using a proxy,
the proxy client should first fetch `<proxyURL>/sumdb/<sumdb-name>/supported`.
If that request returns a successful (HTTP 200) response,
then the proxy supports proxying checksum database requests.
In that case, the client should use the proxied access method only,
never falling back to a direct connection to the database.
If the `/sumdb/<sumdb-name>/supported` check fails with a “not found” (HTTP 404)
or “gone” (HTTP 410) response,
the proxy is unwilling to proxy the checksum database,
and the client should connect directly to the database.
Any other response is treated as the database being unavailable.
A corporate proxy may want to ensure that clients
never make any direct database connections
(for example, for privacy; see the “Rationale” section below).
The optional `/sumdb/supported` endpoint, along with
proxying actual database requests, lets such a proxy
ensure that a `go` command using the proxy
never makes a direct connection to sum.golang.org.
But simpler proxies may wish to focus on serving
only modules and not checksum data—in particular,
module-only proxies can be served from entirely static file systems,
with no special infrastructure at all.
Such proxies can respond with an HTTP 404 or HTTP 410 to
the `/sumdb/supported` endpoint, so that clients
will connect to the database directly.
### `go` command client
The `go` command is the primary consumer of the database’s published log.
The `go` command will [verify the log](https://research.swtch.com/tlog#verifying_a_log)
as it uses it,
ensuring that every record it reads is actually in the log
and that no observed log ever drops a record from an earlier observed log.
The `go` command will refer to `$GOSUMDB` to find the name and public key
of the Go checksum database.
That variable will default to the `sum.golang.org` server.
The `go` command will cache the latest signed tree size and tree hash
in `$GOPATH/pkg/sumdb/<sumdb-name>/latest`.
It will cache lookup results and tiles in
`$GOPATH/pkg/mod/download/cache/sumdb/<sumdb-name>/lookup/path@version`
and `$GOPATH/pkg/mod/download/cache/sumdb/<sumdb-name>/tile/H/L/K[.W]`.
(More generally, `https://<sumdb-URL>` is cached
in `$GOPATH/pkg/mod/download/cache/sumdb/<sumdb-URL>`.)
This way, `go clean -modcache` deletes cached lookup results and tiles
but not the latest signed tree hash, which should be preserved for
detection of timeline inconsistency.
No `go` command (only a manual `rm -rf $GOPATH/pkg`)
will wipe out the memory of the latest observed tree size and hash.
If the `go` command ever does observe a pair of inconsistent signed tree sizes and hashes,
it will complain loudly on standard error and fail the build.
The `go` command must be configured to know which modules are
publicly available and therefore can be looked up in the checksum database,
versus those that are closed source and must not be looked up,
especially since that would transmit potentially private import paths
over the network to the database `/lookup` endpoint.
A few new environment variables control this configuration.
(See the [`go env -w` proposal](https://golang.org/design/30411-env),
now available in the Go 1.13 development branch,
for a way to manage these variables more easily.)
- `GOPROXY=https://proxy.site/path` sets the Go module proxy to use, as before.
- `GONOPROXY=prefix1,prefix2,prefix3` sets a list of module path prefixes,
possibly containing globs, that should not be proxied.
For example:
GONOPROXY=*.corp.google.com,rsc.io/private
will bypass the proxy for the modules foo.corp.google.com, foo.corp.google.com/bar, rsc.io/private, and rsc.io/private/bar,
though not rsc.io/privateer (the patterns are path prefixes, not string prefixes).
- `GOSUMDB=<sumdb-key>` sets the Go checksum database to use,
where `<sumdb-key>` is a verifier key as defined in
[package note](https://godoc.org/golang.org/x/mod/sumdb/note#hdr-Verifying_Notes).
- `GONOSUMDB=prefix1,prefix2,prefix3` sets a list of module path prefixes,
again possibly containing globs, that should not be looked up using the database.
We expect that corporate environments may fetch all modules, public and private,
through an internal proxy;
`GONOSUMDB` allows them to disable checksum database lookups for
internal modules while still verifying public modules.
Therefore, `GONOSUMDB` must not imply `GONOPROXY`.
We also expect that other users may prefer to connect directly to source origins
but still want verification of open source modules or proxying of the database itself;
`GONOPROXY` allows them to arrange that and therefore must not imply `GONOSUMDB`.
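The prefix-pattern semantics described above (path prefixes with per-element globs, not string prefixes) can be sketched as follows. This is an illustrative reimplementation for clarity, not the `go` command's actual matcher:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// matchPrefixPatterns reports whether modPath matches any of the
// comma-separated patterns. Each pattern is a path prefix, possibly
// containing globs within individual path elements, matching the
// semantics described for GONOPROXY and GONOSUMDB.
func matchPrefixPatterns(patterns, modPath string) bool {
	elems := strings.Split(modPath, "/")
	for _, pat := range strings.Split(patterns, ",") {
		pat = strings.TrimSpace(pat)
		if pat == "" {
			continue
		}
		patElems := strings.Split(pat, "/")
		if len(patElems) > len(elems) {
			continue // pattern longer than the path; cannot be a prefix
		}
		ok := true
		for i, pe := range patElems {
			matched, err := path.Match(pe, elems[i])
			if err != nil || !matched {
				ok = false
				break
			}
		}
		if ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := "*.corp.google.com,rsc.io/private"
	for _, p := range []string{
		"foo.corp.google.com",
		"foo.corp.google.com/bar",
		"rsc.io/private",
		"rsc.io/private/bar",
		"rsc.io/privateer", // path prefix, not string prefix: must not match
	} {
		fmt.Printf("%-28s %v\n", p, matchPrefixPatterns(patterns, p))
	}
}
```

Matching element by element is what makes `rsc.io/private` cover `rsc.io/private/bar` but not `rsc.io/privateer`.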
The database not being able to report `go.sum` lines for a module version
is a hard failure:
any private modules must be explicitly listed in `$GONOSUMDB`.
(Otherwise an attacker could block traffic to the database
and make all module versions appear to be genuine.)
The database can be disabled entirely with `GONOSUMDB=*`.
The command `go get -insecure` will report but not stop after database lookup
failures or database mismatches.
## Rationale
The motivation for authenticating module downloads is
covered in the background section above.
Note that we want to authenticate modules
obtained both from direct connections to code-hosting servers
and from module proxies.
Two topics are worth further discussion:
first, having a single database server for the entire Go ecosystem,
and second, the privacy implications of a database server.
### Security
The Go team at Google will run the Go checksum database as a service to the Go ecosystem,
similar to running `godoc.org` and `golang.org`.
It is important that the service be secure.
Our thinking about the security design of the database has evolved over time,
and it is useful to outline the evolution that led to the
current design.
The simplest possible approach, which we never seriously considered,
is to have one trusted server that issues a signed certificate for each
module version.
The drawback of this approach is that a compromised server
can be used to sign a certificate for a compromised module version,
and then that compromised module version and certificate
can be served to a target victim without easy detection.
One way to address this weakness is strength in numbers:
have, say, N=3 or N=5 organizations run independent servers,
gather certificates from all of them, and accept a module version
as valid when, say, (N+1)/2 certificates agree.
The two drawbacks of this approach are that it is significantly more expensive
and still provides no detection of actual attacks.
The payoff from targeted replacement of source code
could be high enough to justify silently compromising (N+1)/2
notaries and then making very selective use of the certificates.
So our focus turned to detection of compromise.
Requiring a checksum database to log a `go.sum` entry in a
[transparent log](https://research.swtch.com/tlog)
before accepting it does raise the likelihood of detection.
If the compromised `go.sum` entry is stored in the
actual log, an auditor can find it.
And if the compromised `go.sum` entry is served in
a forked, victim-specific log, the server must always serve
that forked log to the victim, and only to the victim,
or else the `go` command's consistency checks will fail
loudly, and with enough information to cryptographically
prove the compromise of the server.
An ecosystem with multiple proxies run by different organizations
makes a successful “forked log” attack even harder:
the attacker would have to not only compromise the database,
it would also have to compromise each possible proxy the
victim might use and arrange to identify the victim well enough
to always serve the forked log to the victim
and to never serve it to any non-victim.
The serving of the transparent log in tile form helps
caching and proxying but also makes victim identification
that much harder.
When using Certificate Transparency's proof endpoints,
the proof requests might be arranged to carry enough
material to identify a victim, for example by only ever serving
even log sizes to the victim and odd log sizes to others
and then adjusting the log-size-specific proofs accordingly.
But complete tile fetches expose no information about the cached log size,
making it that much harder to serve modified tiles only to the victim.
We hope that proxies run by various
organizations in the Go community will also serve as auditors
and double-check Go checksum database log entries
as part of their ordinary operation.
(Another useful
service that could be enabled by
the database is a notification service to alert
authors about new versions of their own modules.)
As described earlier,
users who want to ensure their own compromise requires
compromising multiple organizations can use Google's checksum database
and a different organization's proxy to access it.
Generalizing that approach,
the usual way to further improve detection of fork attacks is to add gossip,
so that different users can check whether they are seeing
different logs.
In effect, the proxy protocol already supports this,
so that any available proxy that proxies the database
can be a gossip source.
If we add a `go fetch-latest-checksum-log-from-goproxy` (obviously not the final name)
and
GOPROXY=https://other.proxy/ go fetch-latest-checksum-log-from-goproxy
succeeds, then the client and other.proxy are seeing the same log.
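The gossip check reduces to comparing two signed tree heads: one cached locally, one fetched through another proxy. Below is a minimal sketch of just the comparison step, assuming signature verification has already happened; the three-line wire format here is illustrative, not the exact format the database will serve:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

// treeHead is the parsed body of a signed tree head: the log name, the
// number of records, and the hash of the tree with that many records.
type treeHead struct {
	Name string
	Size int64
	Hash string
}

// parseTreeHead parses an illustrative three-line tree head body.
func parseTreeHead(text string) (treeHead, error) {
	lines := strings.Split(strings.TrimSpace(text), "\n")
	if len(lines) != 3 {
		return treeHead{}, errors.New("malformed tree head")
	}
	size, err := strconv.ParseInt(lines[1], 10, 64)
	if err != nil {
		return treeHead{}, err
	}
	return treeHead{Name: lines[0], Size: size, Hash: lines[2]}, nil
}

// checkGossip compares our cached tree head against one seen via another
// proxy. Two heads for the same tree size must have the same hash;
// otherwise the log has forked. (Heads of different sizes would be
// reconciled with a consistency proof, omitted in this sketch.)
func checkGossip(ours, theirs treeHead) error {
	if ours.Name != theirs.Name {
		return errors.New("tree heads are for different logs")
	}
	if ours.Size == theirs.Size && ours.Hash != theirs.Hash {
		return fmt.Errorf("fork detected: two size-%d heads with different hashes", ours.Size)
	}
	return nil
}

func main() {
	ours, _ := parseTreeHead("go.sum database tree\n1000\nhash-aaa")
	other, _ := parseTreeHead("go.sum database tree\n1000\nhash-bbb")
	fmt.Println(checkGossip(ours, ours))  // consistent
	fmt.Println(checkGossip(ours, other)) // fork detected
}
```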
Compared to the original scenario of a single checksum database with
no transparent log, the use of a single transparent log
and the ability to proxy the database and gossip improves
detection of attacks so much that there is little incremental
security benefit to adding the complexity of multiple notaries.
At some point in the future, it might make sense for the
Go ecosystem to support using multiple databases,
but to begin with we have opted for the simpler
(but still reasonably secure) ecosystem design
of a single database.
### Privacy
Contacting the Go checksum database to authenticate a new dependency
requires sending the module path and version to the database server.
The database server will of course need to publish a privacy policy,
and it should be written as clearly as
the [Google Public DNS Privacy Policy](https://developers.google.com/speed/public-dns/privacy)
and be sure to include information about log retention windows.
That policy is still under development.
But the privacy policy only matters for data the database receives.
The design of the database protocol and usage is meant to minimize
what the `go` command even sends.
There are two main privacy concerns:
exposing the text of private modules paths to the database,
and exposing usage information for public modules to the database.
#### Private Module Paths
The first main privacy concern is that a misconfigured `go` command
could send the text of a private module path
(for example, `secret-machine.rsc.io/private/secret-plan`) to the database.
The database will try to resolve the module, triggering a DNS lookup
for `secret-machine.rsc.io` and, if that resolves, an HTTPS fetch
for the longer URL.
Even if the database then discards that path immediately upon failure,
it has still been sent over the network.
Such misconfiguration must not go unnoticed.
For this reason (and also to avoid downgrade attacks),
if the database cannot return information about a module,
the download fails loudly and the `go` command stops.
This ensures both that all public modules are in fact
authenticated and also that any misconfiguration
must be corrected (by setting `$GONOSUMDB` to avoid
the database for those private modules)
in order to achieve a successful build.
This way, the frequency of misconfiguration-induced
database lookups should be minimized.
Misconfigurations fail; they will be noticed and fixed.
One possibility to further reduce exposure of private module path text
is to provide additional ways to
set `$GONOSUMDB`, although it is not clear what those
should be.
A top-level module's source code repository is an attractive place to
want to store configuration such as `$GONOSUMDB`
and `$GOPROXY`, but then that configuration changes
depending on which version of the repo is checked out,
which would cause interesting behavior when testing old
versions, whether by hand or using tools like `git bisect`.
(The nice thing about environment variables is that most
corporate computer management systems already provide
ways to preset environment variables.)
#### Private Module SHA256s
Another possibility to reduce exposure is to support and
use by default an alternate lookup `/lookup/SHA256(module)@version`,
which sends the SHA256 hash of the module path
instead of the path text itself.
If the database was already aware of that module path,
it would recognize the SHA256 and perform the lookup,
even potentially fetching a new version of the module.
If a misconfigured `go` command sends the SHA256 of
a private module path, that is far less information.
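Under this hypothetical scheme, the lookup URL would carry a hash rather than the path text. A sketch of building such a request path (the exact endpoint shape is an assumption for illustration; this scheme is discussed but not part of the proposal):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashedLookupPath returns a /lookup URL path in which the SHA256 hash
// of the module path stands in for the path text itself, so a
// misconfigured client never transmits a private path in the clear.
func hashedLookupPath(modPath, version string) string {
	sum := sha256.Sum256([]byte(modPath))
	return fmt.Sprintf("/lookup/%s@%s", hex.EncodeToString(sum[:]), version)
}

func main() {
	fmt.Println(hashedLookupPath("rsc.io/quote", "v1.5.2"))
}
```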
The SHA256 scheme does require, however, that the first use of a
public module be accompanied by some operation that sends
its module path text to the database, so that the database
can update its inverse-SHA256 index.
That operation—for now, let's call it `go notify <modulepath>`—would
need to be run just once ever across the whole Go ecosystem
for each module path.
Most likely the author would do it, perhaps as part of the
still-hypothetical `go release` command,
or else the first user of the module would need to do it
(perhaps thinking carefully about being the first-ever user of the module!).
A modification of the SHA256 scheme might be to send a truncated hash,
designed to produce [K-anonymity](https://en.wikipedia.org/wiki/K-anonymity),
but this would cause significant expense:
if the database identified K public modules with the truncated hash,
it would have to look up the given version tag for all K of them
before returning an answer. This seems needlessly expensive
and of little practical benefit.
(An attacker might even create a long list of module paths
that collide with a popular module, just to slow down requests.)
The SHA256 + `go notify` scheme is not part of this proposal today,
but we are considering adding it,
with full hashes, not truncated ones.
#### Public Module Usage Information
The second main privacy concern is that even developers who use only
public modules would expose information about their module usage habits
by requesting new `go.sum` lines from the database.
Remember that the `go` command only contacts the database
in order to find new lines to add to `go.sum`.
When `go.sum` is up-to-date, as it is during ordinary development,
the database is never contacted.
That is, the database is only involved at all when adding a new dependency
or changing the version of an existing one.
That significantly reduces the amount of usage information
being sent to the database in the first place.
Note also that even `go get -u` does not request information
about every dependency from the database:
it only requests information about dependencies with
updates available.
The `go` command will also cache database lookup results
(reauthenticating them against cached tiles at each use),
so that using a single computer to
upgrade the version of a particular dependency used by N different modules
will result in only one database lookup, not N.
That further reduces the strength of any usage signal.
One possible way to even further reduce the usage signal
observable by the database might be to use a truncated hash
for K-anonymity, as described in the previous section,
but the efficiency problems described earlier still apply.
Also, even if any particular fetch downloaded information
for K different module paths, the likely-very-lopsided popularity
distribution might make it easy to guess which module
path a typical client was really looking for,
especially combined with version information.
Truncated hashes appear to cost more than the benefit
they would bring.
The complete solution for not exposing either
private module path text or public module usage information
is to use a proxy or a bulk download.
#### Privacy by Proxy
A complete solution for database privacy concerns is for
developers to access the database only through a proxy,
such as a local Athens instance or JFrog Artifactory instance,
assuming those proxies add support for proxying and
caching the Go database service endpoints.
The proxy can be configured with a list of private module patterns,
so that even requests from a misconfigured `go` command never
make it past the proxy.
The database endpoints are designed for cacheability,
so that a proxy can avoid making any request more than once.
Requests for new versions of modules would still need to be
relayed to the database.
We anticipate that there will be many proxies available
for use in the Go ecosystem.
Part of the motivation for the Go checksum database is to allow
the use of any available proxy to download modules,
without any reduction in security.
Developers can then use any proxy they are comfortable using,
or run their own.
#### Privacy by Bulk Download
What little usage signal leaks from a proxy that aggressively caches
database queries can be removed entirely by instead downloading
the entire checksum database and answering requests using the
local copy.
We estimate that the Go ecosystem has around 3 million module versions.
At an estimated footprint of 200 bytes per module version,
the entire database today would be only about 600 MB,
and even a much larger database of 100 million module versions would still be only 20 GB.
Bandwidth can be exchanged for complete anonymity
by downloading the full database once and thereafter updating it incrementally
(easy, since it is append-only).
Any queries can be answered using only the local copy,
ensuring that neither private module paths nor
public module usage is exposed.
The cost of this approach is the need for clients to download the entire database
despite only needing an ever-smaller fraction of it.
(Today, assuming only a 3-million-entry database,
a module with even 100 dependencies would be downloading
30,000 times more database than it actually needs.
As the Go ecosystem grows, so too does the overhead factor.)
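The arithmetic behind these estimates, including the 30,000× overhead factor, can be checked directly from the 200-bytes-per-version figure:

```go
package main

import "fmt"

func main() {
	const bytesPerVersion = 200 // estimated go.sum footprint per module version

	today := int64(3_000_000) * bytesPerVersion    // ~3M versions today
	future := int64(100_000_000) * bytesPerVersion // hypothetical 100M versions

	// A module with 100 dependencies needs only 100 entries of the database.
	needed := int64(100) * bytesPerVersion

	fmt.Printf("today: %d MB, future: %d GB, overhead: %dx\n",
		today/1_000_000, future/1_000_000_000, today/needed)
	// today: 600 MB, future: 20 GB, overhead: 30000x
}
```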
Downloading the entire database might be a good strategy
for a corporate proxy, however.
#### Privacy in CI/CD Systems
A question was raised about privacy of database operations especially
in CI/CD systems.
We expect that a CI/CD system would _never_ contact the database.
First, in typical usage, you only push code to a CI/CD system after
first at least building (and hopefully also testing!) any changes locally.
Building any changes locally will update `go.mod` and `go.sum`
as needed, and then the `go.sum` pushed to the CI/CD system
will be up-to-date. The database is only involved when adding to `go.sum`.
Second, module-aware CI/CD systems should already be using `-mod=readonly`,
to fail on out-of-date `go.mod` files instead of silently updating them.
We will ensure that `-mod=readonly` also fails on out-of-date `go.sum` files
if it does not already ([#30667](https://golang.org/issue/30667)).
## Compatibility
The introduction of the checksum database does not have any compatibility
concerns at the command or language level.
However, proxies that serve modified copies of public modules
will be incompatible with the new checks and stop being usable.
This is by design: such proxies are indistinguishable from man-in-the-middle attacks.
## Implementation
The Go team at Google is working on a production implementation
of both a Go module proxy and the Go checksum database,
as we described in the blog post “[Go Modules in 2019](https://blog.golang.org/modules2019).”
We will publish a checksum database client as part of the `go` command,
as well as an example database implementation.
We intend to ship support for the checksum database, enabled by default, in Go 1.13.
Russ Cox will lead the `go` command integration
and has posted a [stack of changes in golang.org/x/exp/notary](https://go-review.googlesource.com/q/f:notary).
# Proposal: testing: better support test helper functions with TB.Helper
Author: Caleb Spare, based on previous work by Josh Bleecher Snyder
Last updated: 2016-12-27
Discussion at https://golang.org/issue/4899.
## Abstract
This proposal is about fixing the long-standing issue
[#4899](https://golang.org/issue/4899).
When a test calls a helper function that invokes, for instance,
`"*testing.T".Error`, the line number that is printed for the test failure
indicates the `Error` call site as inside the helper method.
This is almost always unhelpful for pinpointing the actual failure.
We propose to add a new `testing.TB` method, `Helper`,
which marks the calling function as a test helper.
When logging test messages, package testing ignores frames
that are inside marked helper functions.
It prints the first stack position inside a non-helper function.
## Background
In Go tests, it is common to use a helper function to perform some repeated
non-trivial check.
These are often of the form
func helper(t *testing.T, other, args)
though other variants exist.
Such helper functions may be local to the test package or may come from external packages.
There are many examples of such helper functions in the standard library tests.
Some are listed below.
When a helper function calls `t.Error`, `t.Fatal`, or a related method,
the error message includes file:lineno output that indicates the location of the failure.
The failure location is currently considered inside the helper method, which is unhelpful.
The misplaced failure location also inhibits useful IDE features like
automatically jumping to the failure position.
There are a variety of workarounds to which people have resorted.
### 1. Ignore the problem, making it harder to debug test failures
This is a common approach.
If the helper is only called once from the `Test*` function,
then the problem is less severe:
the test failure prints the name of the `Test*` function that failed,
and by locating the only call to the helper within that function,
the user knows the failure site.
This is just an annoyance.
When the helper is called more than once, it can be impossible to locate the
source of the failure without further debugging.
A few examples of this pattern in the standard library:
- cmd/cover: `func run(c *exec.Cmd, t *testing.T)`
- cmd/go: the methods of `testgo`
- compress/flate: `writeToType(t *testing.T, ttype string, bw *huffmanBitWriter, tok []token, input []byte)`
- crypto/aes: `func mustPanic(t *testing.T, msg string, f func())`
- database/sql: `func numPrepares(t *testing.T, db *DB) int`
- encoding/json: `func diff(t *testing.T, a, b []byte)`
- fmt: `func presentInMap(s string, a []string, t *testing.T)`, `func check(t *testing.T, got, want string)`
- html/template: the methods of `testCase`
- image/gif: `func try(t *testing.T, b []byte, want string)`
- net/http: `func checker(t *testing.T) func(string, error)`
- os: `func touch(t *testing.T, name string)`
- reflect: `func assert(t *testing.T, s, want string)`
- sync/atomic: `func shouldPanic(t *testing.T, name string, f func())`
- text/scanner: `func checkPos(t *testing.T, got, want Position)` and some of its callers
### 2. Pass around more context to be printed as part of the error message
This approach adds enough information to the failure message to pinpoint the
source of failure, at the cost of greater burden on the test writer.
The result still isn't entirely satisfactory for the test invoker:
if the user only looks at the file:lineno in the failure message,
they are still led astray until they examine the full message.
Some standard library examples:
- bytes: `func check(t *testing.T, testname string, buf *Buffer, s string)`
- context: `func testDeadline(c Context, name string, failAfter time.Duration, t testingT)`
- debug/gosym: `func assertString(t *testing.T, dsc, out string)`
- mime/multipart: `func expectEq(t *testing.T, expected, actual, what string)`
- strings: `func equal(m string, s1, s2 string, t *testing.T) bool`
- text/scanner: `func checkTok(t *testing.T, s *Scanner, line int, got, want rune, text string)`
### 3. Use the \r workaround
This technique is used by test helper packages in the wild.
The idea is to print a carriage return from inside the test helper
in order to hide the file:lineno printed by the testing package.
Then the helper can print its own file:lineno and message.
One example is
[github.com/stretchr/testify](https://github.com/stretchr/testify/blob/2402e8e7a02fc811447d11f881aa9746cdc57983/assert/assertions.go#L226).
## Proposal
We propose to add two methods in package testing:
// Helper marks the current function as a test helper function.
// When printing file and line information
// from methods such as t.Error, t.Fatal, and t.Log,
// the current function will be skipped.
func (t *T) Helper()
// same doc comment
func (b *B) Helper()
When package testing prints file:lineno, it walks up the stack,
skipping helper functions, and chooses the first entry in a non-helper function.
We also propose to add `Helper()` to the `testing.TB` interface.
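The frame-skipping described here can be sketched outside package testing using `runtime.Callers`. This is a simplified model of the proposed mechanism, not the real implementation (package testing would track helper PCs per test; a global map keyed by function name stands in for that here):

```go
package main

import (
	"fmt"
	"runtime"
)

// helperNames records functions marked as helpers.
var helperNames = map[string]bool{}

// markHelper records its calling function as a helper, like t.Helper().
func markHelper() {
	pc, _, _, _ := runtime.Caller(1)
	helperNames[runtime.FuncForPC(pc).Name()] = true
}

// failureLine walks up the stack, skipping marked helpers, and returns
// the file:line of the first frame in a non-helper function — the
// location that package testing would print for a failure.
func failureLine() string {
	var pcs [16]uintptr
	n := runtime.Callers(2, pcs[:]) // skip runtime.Callers and failureLine
	frames := runtime.CallersFrames(pcs[:n])
	for {
		frame, more := frames.Next()
		if !helperNames[frame.Function] {
			return fmt.Sprintf("%s:%d", frame.File, frame.Line)
		}
		if !more {
			return "unknown"
		}
	}
}

// report is a check helper in the style of the examples above. Because it
// marks itself, failures are attributed to its caller's line.
func report(got, want string) string {
	markHelper()
	if got != want {
		return fmt.Sprintf("%s: got %q, want %q", failureLine(), got, want)
	}
	return ""
}

func main() {
	fmt.Println(report("a", "b")) // attributed to this line, not to report
}
```

Because the walk is driven by which functions are marked rather than a fixed frame count, nested helpers (one marked helper calling another) fall out naturally.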
## Rationale
### Alternative 1: allow the user to specify how many stack frames to skip
An alternative fix is to give the user control over the number of stack frames to skip.
This is similar to what package log already provides:
func Output(calldepth int, s string) error
func (l *Logger) Output(calldepth int, s string) error
For instance, in https://golang.org/cl/12405043 @robpike writes
> // Up returns a *T object whose error reports identify the line n callers
> // up the frame.
> func (t *T) Up(n) *t { .... }
>
> Then you could write
>
> t.Up(1).Error("this would be tagged with the caller's line number")
@bradfitz mentions similar APIs in [#4899](https://golang.org/issue/4899) and
[#14128](https://golang.org/issue/14128).
`Helper` is easier to use, because the user doesn't have to think about stack frames,
and it does not break when refactoring.
Also, it is not always easy to decide how many frames to skip.
A helper may be called through multiple paths, so it may be a variable depth
from the desired logging site.
For example, in the cmd/go tests, the `"*testgoData".must` helper is called directly
by some tests, but is also called by other helpers such as `"*testgoData".cd`.
Manual stack control would require the user to pass state into this method
in order to know whether to skip one or two frames.
Using the `Helper` API, the user would simply mark both `must` and `cd` as helpers.
### Alternative 2: use a special Logf/Errorf/Fatalf sentinel
Another approach given by @bradfitz in
[#14128](https://github.com/golang/go/issues/14128#issuecomment-176254702)
is to provide a magic format value:
t.Logf("some value = %v", val, testing.NoDecorate)
This seems roughly equivalent in power to our proposal, but it has downsides:
* It breaks usual `printf` conventions (@adg [points out that vet would have to
be aware of it](https://github.com/golang/go/issues/14128#issuecomment-176456878)).
* The mechanism is unusual -- it lacks precedent in the standard library.
* `NoDecorate` is less obvious in godoc than a TB method.
* Every `testing.T` method in a helper method must be decorated.
* It is not clear how to handle nested helpers other than by manually specifying
the number of stack frames to skip, inheriting the problems of alternative 1.
## Compatibility
Adding a method to `*testing.T` and `*testing.B` raises no compatibility issues.
We will also add the method to the `testing.TB` interface.
Normally changing interface method sets is verboten,
but in this case it is fine because `TB` has a private method
specifically to prevent other implementations:
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate Go 1 compatibility.
private()
## Implementation
@cespare will send a CL implementing `"testing.TB".Helper` based on
@josharian's previous work in https://golang.org/cl/79890043.
The CL will be sent before April 30, 2017 in order to make the 1.9 release cycle.
## Open issues
This change directly solves [#4899](https://golang.org/issue/4899).
# Go 1.3+ Compiler Overhaul
Russ Cox \
December 2013 \
golang.org/s/go13compiler
## Abstract
The Go compiler today is written in C. It is time to move to Go.
[**Update, 2014**: This work was completed and presented at GopherCon. See “[Go from C to Go](https://www.youtube.com/watch?v=QIE5nV5fDwA)”.]
[**Update, 2023.** This plan was originally published as a Google document. For easier access, it was converted to Markdown in this repository in 2023. Later work has overhauled the compiler further in a number of ways, validating the conjectures about the benefits of converting to Go. This document has only minor historical value now.]
## Background
The “gc” Go toolchain is derived from the Plan 9 compiler toolchain. The assemblers, C compilers, and linkers are adopted essentially unchanged, and the Go compilers (in cmd/gc, cmd/5g, cmd/6g, and cmd/8g) are new C programs that fit into the toolchain.
Writing the compiler in C had some important advantages over using Go at the start of the project, most prominent among them the fact that, at first, Go did not exist and so could not be used to write a compiler, and the fact that, once Go did exist, it often changed in significant, backwards-incompatible ways. Using C instead of Go avoided both the initial and ongoing bootstrapping problems. Today, however, Go does exist, and its definition is stable as of Go 1, so the problems of bootstrapping are greatly reduced.
As the bootstrapping problems have receded, other engineering concerns have arisen that make Go much more attractive than C for the compiler implementation. The concerns include:
- It is easier to write correct Go code than to write correct C code.
- It is easier to debug incorrect Go code than to debug incorrect C code.
- Work on a Go compiler necessarily requires a good understanding of Go. Implementing the compiler in C adds an unnecessary second requirement.
- Go makes parallel execution trivial compared to C.
- Go has better standard support than C for modularity, for automated rewriting, for unit testing, and for profiling.
- Go is much more fun to use than C.
For all these reasons, we believe it is time to switch to Go compilers written in Go.
## Proposed Plan
We plan to translate the existing compilers from C to Go by writing and then applying an automatic translator. The conversion will proceed in phases, starting in Go 1.3 but continuing into future releases.
_Phase 1_. Develop and debug the translator. This can be done in parallel with ordinary development. In particular, it is fine for people to continue making changes to the C version of the compiler during this phase. The translator is a fair amount of work, but we are confident that we can build one that works for the specific case of translating the compilers. There are many corners of C that have no direct translation into Go; macros, unions, and bit fields are probably highest on the list. Fortunately (but not coincidentally), those features are rarely used, if at all, in the code being translated. Pointer arithmetic and arrays are also some work to translate, but even those are rare in the compiler, which primarily operates on trees and linked lists. The translator will preserve the comments and structure of the original C code, so the translation should be as readable as the current compiler.
_Phase 2_. Use the translator to convert the compilers from C to Go and delete the C copies. At this point we have transitioned to Go and still have a working compiler, but the compiler is still very much a C program. This may happen for Go 1.3, but that’s pretty aggressive. It is more likely to happen for Go 1.4.
_Phase 3_. Use some tools, perhaps derived from gofix and the Go oracle to split the compiler into packages, cleaning up and documenting the code, and adding unit tests as appropriate. This phase turns the compiler into an idiomatic Go program. This is targeted for Go 1.4.
_Phase 4a_. Apply standard profiling and measurement techniques to understand and optimize the memory and CPU usage of the compiler. This may include introducing parallelization; if so, the race detector is likely to be a significant help. This is targeted for Go 1.4, but parts may slip to Go 1.5. Some basic profiling and optimization may be done earlier, in Phase 3.
_Phase 4b_. (Concurrent with Phase 4a.) With the compiler split into packages with clearly defined boundaries, it should be straightforward to introduce a new middle representation between the architecture-independent unordered tree (Node*s) and the architecture-dependent ordered list (Prog*s) used today. That representation, which should be architecture-independent but contain information about precise order of execution, can be used to introduce order-dependent but architecture-independent optimizations like elimination of redundant nil checks and bounds checks. It may be based on SSA and if so would certainly take advantage of the lessons learned from Alan Donovan’s go.tools/ssa package.
_Phase 5_. Replace the front end with the latest (perhaps new) versions of go/parser and go/types. Robert Griesemer has discussed the possibility of designing new go/parser and go/types APIs at some point, based on experience with the current ones (and under new names, to preserve Go 1 compatibility). The work of connecting them to a compiler back end may help guide design of new APIs.
## Bootstrapping
With a Go compiler written in Go, there must be a plan for bootstrapping from scratch. The rule we plan to adopt is that the Go 1.3 compiler must compile using Go 1.2, Go 1.4 must compile using Go 1.3, and so on. Then there is a clear path to generating current binaries: build the Go 1.2 toolchain (written in C), use it to build the Go 1.3 toolchain, and so on. There will be a shell script to do this; it will take CPU time but not human time. The bootstrapping only needs to be done once per machine; the Go 1.x binaries can be kept in a known location and reused each time all.bash is run during the development of Go 1.(x+1).
Obviously, this bootstrapping path scales poorly over time. Before too many releases have gone by, it may make sense to write a back end for the compiler that generates C code. The code need not be efficient or readable, just correct. That C version would be checked in, just as today we check in the y.tab.c file generated by yacc. The bootstrap sequence would invoke gcc on that C code to build a bootstrap compiler, and the bootstrap compiler would be used to build the real compiler. Like in the other scheme, the bootstrap compiler binary can be kept in a known location and reused (not rebuilt) each time all.bash is run.
## Alternatives
There are a few alternatives that would be obvious approaches to consider, and so it is worth explaining why we have decided against them.
_Write new compilers from scratch_. The current compilers do have one very important property: they compile Go correctly (or at least correctly enough for nearly all current users). Despite Go’s simplicity, there are many subtle cases in the optimizations and other rewrites performed by the compilers, and it would be foolish to throw away the 10 or so man-years of effort that have gone into them.
_Translate the compiler manually_. We have translated other, smaller C and C++ programs to Go manually. The process is tedious and therefore error-prone, and the mistakes can be very subtle and difficult to find. A mechanical translator will instead generate translations with consistent classes of errors, which should be easier to find, and it will not zone out during the tedious parts. The Go compilers are also significantly larger than anything we’ve converted: over 60,000 lines of C. Mechanical help will make the job much easier. As Dick Sites wrote in 1974, “I would rather write programs to help me write programs than write programs.” Translating the compiler mechanically also makes it easier for development on the C originals to proceed unhindered until we are ready for the switch.
_Translate just the back ends and connect to go/parser and go/types immediately_. The data structures in the compiler that convey information from the front end to the back ends look nothing like the APIs presented by go/parser and go/types. Replacing the front end by those libraries would require writing code to convert from the go/parser and go/types data structures into the ones expected by the back ends, a very broad and error-prone undertaking. We do believe that it makes sense to use these packages, but it also makes sense to wait until the compiler is structured more like a Go program, into documented sub-packages of its own with defined boundaries and unit tests.
_Discard the current compilers and use gccgo (or go/parser + go/types + LLVM, or …)_. The current compilers are a large part of Go’s flexibility. Tying development of Go to a comparatively larger code base like GCC or LLVM seems likely to hurt that flexibility. Also, GCC is a large C (now partly C++) program and LLVM a large C++ program. All the reasons listed above justifying a move away from the current compiler code apply as much or more to these code bases.
## Long Term Use of C
Carried to completion, this plan still leaves the rest of the Plan 9 toolchain written in C. In the long term it would be nice to eliminate all C from the tree. This section speculates on how that might happen. It is not guaranteed to happen in this way or at all.
_Package runtime_. Most of the runtime is written in C, for many of the same reasons that the Go compiler is written in C. However, the runtime is much smaller than the compilers and it is already written in a mix of Go and C. It is plausible to convert the C to Go one piece at a time. The major pieces are the scheduler, the garbage collector, the hash map implementation, and the channel implementation. (The fine mixing of Go and C is possible here because the C is compiled with 6c, not gcc.)
_C compilers_. The Plan 9 C compilers are themselves written in C. If we remove all the C from Go package implementations (in particular, package runtime), we can remove these compilers: “go tool 6c” and so on would be no more, and .c files in Go package directory sources would no longer be supported. We would need to announce these plans early, so that external packages written partly in C have time to remove their uses. (Cgo, which uses gcc instead of 6c, would remain as a way to write parts of a package in C.) The Go 1 compatibility document excludes changes to the toolchain; deleting the C compilers is permitted.
_Assemblers_. The Plan 9 assemblers are also written in C. However, the assembler is little more than a simple parser coupled with a serialization of the parse tree. That could easily be translated to Go, either automatically or by hand.
_Linkers_. The Plan 9 linkers are also written in C. Recent work has moved most of the linker into the compilers, and there is already a plan to rewrite what is left as a new, much simpler Go program. The part of the linker that has moved into the Go compiler will now need to be translated along with the rest of the compiler.
_Libmach-based tools: nm, pack, addr2line, and objdump_. Nm has already been rewritten in Go. Pack and addr2line can be rewritten any day. Objdump currently depends on libmach’s disassemblers, but those should be straightforward to convert to Go, whether mechanically or manually, and at that point libmach itself can be deleted.
# Proposal: Audio for Mobile
Author: Jaana Burcu Dogan
With input from David Crawshaw, Hyang-Ah Kim and Andrew Gerrand.
Last updated: November 30, 2015
Discussion at https://golang.org/issue/13432.
## Abstract
This proposal suggests core abstractions to support audio decoding
and playback on mobile devices.
## Background
In the scope of the Go mobile project, an audio package that supports
decoding and playback is a top priority. The current status of audio
support under x/mobile is limited to OpenAL bindings and an experimental
high-level audio player that is backed by OpenAL.
The experimental audio package fails to
- provide high level abstractions to represents audio and audio processors,
- implement a memory-efficient playback model,
- implement decoders (e.g. an mp3 decoder),
- support live streaming or other networking audio sources.
In order to address these concerns, I am proposing core abstractions and
a minimal set of features based on the proposed abstractions to provide
decoding and playback support.
## Proposal
I (Burcu Dogan) surveyed the top iOS and Android apps for audio features.
Three major categories with substantially different requirements emerged from
the survey. A good audio package shouldn't address these different classes of
requirements with isolated audio APIs, but must introduce common concepts and
types that could be the backbone of both high- and low-level audio packages.
This is how we will enable users to expand their audio capabilities by
partially delegating their work to lower-level layers of the audio package
without having to rewrite their entire audio stack.
### Features considered
This section briefly explains the features required in order to support common
audio requirements of the mobile applications. The abstractions we introduce
today should be extendable to meet a majority of the features listed below in
the long run.
#### Playback
Single or multi-channel playback with player controls such as play, pause,
stop, etc. Games use a looping sample as the background music -- looping
functionality is also essential. Multiple playback instances are needed. Most
games require a background audio track and one-shot audio effects on the
foreground.
#### Decoding
Codec library and decoding support. Most radio-like apps and music players
need to play a variety of audio sources. Codec support on par with
AudioUnit on iOS and OpenMAX on Android is good to have.
#### Remote streaming
Audio players, radios and tools that stream audio need to be able to work
with remote audio sources. HTTP Live Streaming works on both platforms but
used to be inefficient on Android devices.
#### Synchronization and composition
- Synchronization between channels/players
- APIs that allow developers to schedule the playback, frame-level timers
- Mixers, multiple channels need to be multiplexed into a single device buffer
- Music software apps that require audio composition and filtering features
#### Playlist features
Music players and radios require playlisting features, so that users can
queue and unqueue tracks on the player. Players also need shuffling and
repeating features.
More information on the classification of the audio apps based on the features
listed above is available at Appendix: Audio Apps Classification.
### Goals
#### Short-term goals
- Playback of generated data (such as a PCM sine wave).
- Playback of an audio asset.
- Playback from streaming network sources.
- Core interfaces to represent decoders.
- Initial decoder implementations, ideally delegating the decoding to the
  system codecs (OpenMAX for Android and AudioUnit for iOS).
- Basic play functions such as play (looping and one-shot), stop, pause,
gain control.
- Prefetching before user invokes playback.
#### Longer-term goals
- Multi channel playback (Playing multiple streams at the same time.)
- Multi channel synchronization and an internal clock
- Composition and filtering (mixing of multiple signals, low-pass filter,
reverb, etc)
- Tracklisting features to queue, unqueue multiple sources to a player;
playback features such as prefetching the next song
### Non-goals
- Audio capture. Recording and encoding audio is not in the roadmap initially.
Both could be added to the package without touching any API surface.
- Dependency on the visual frame rate. This feature requires the audio
  scheduler to work in cooperation with the graphics layer and is currently
  not on our radar.
### Core abstractions
This section proposes the core interfaces and abstractions to represent audio,
audio sources and decoding primitives. The goal of introducing and agreeing on
the core abstractions is to be able to extend the audio package features in
the light of the considered features listed above without breaking the APIs.
#### Clip
The audio package will represent audio data as linear PCM formatted in-memory
audio chunks. A fundamental interface, Clip, will define how to consume audio
data and how audio attributes (such as bit and sample rate) are reported to
the consumers of an audio media source.
Clip is a small window into the underlying audio data.
```
// FrameInfo represents the frame-level information.
type FrameInfo struct {
// Channels represent the number of audio channels
// (e.g. 1 for mono, 2 for stereo).
Channels int
// Bit depth is the number of bits used to represent
// a single sample.
BitDepth int
// Sample rate is the number of samples to be played
// at each second.
SampleRate int64
}
// Clip represents linear PCM formatted audio.
// Clip can seek and read a small number of frames to allow users to
// consume a small section of the underlying audio data.
//
// Frames return audio frames up to a number that can fit into the buf.
// n is the total number of returned frames.
// err is io.EOF if there are no frames left to read.
//
// FrameInfo returns the basic frame information about the clip audio.
//
// Seek seeks to byte (offset*framesize*channels) in the source audio data.
// Seeking to a negative offset is illegal.
// An error is returned if the offset is out of the bounds of the
// audio data source.
//
// Size returns the total number of bytes of the underlying audio data.
// TODO(jbd): Support cases where size is unknown?
type Clip interface {
Frames(buf []byte) (n int, err error)
Seek(offset int64) (error)
FrameInfo() FrameInfo
Size() int64
}
```
#### Decoders
Decoders take arbitrary input and are responsible for producing a Clip.
TODO(jbd): Proposal should also mention how the decoders will be organized.
e.g. image package's support for png, jpeg, gif, etc decoders.
```
// Decode reads audio data from r and converts the input
// to a PCM clip output.
func Decode(r io.ReadSeeker) (Clip, error) {
panic("not implemented")
}
// DecodeWAVBytes decodes the given WAV byte slice into a PCM clip
// output. An error is returned if any of the decoding steps fail.
// (e.g. the FrameInfo cannot be determined from the WAV header.)
func DecodeWAVBytes(data []byte) (Clip, error) {
panic("not implemented")
}
```
#### Clip sources
Any arbitrary valid audio data source can be converted into a clip. Examples
of clip sources are networking streams, file assets and in-memory buffers.
```
// NewBufferClip converts a buffer to a Clip.
func NewBufferClip(buf []byte, info FrameInfo) Clip {
panic("not implemented")
}
// NewRemoteClip converts the HTTP live streaming media
// source into a Clip.
func NewRemoteClip(url string) (Clip, error) {
panic("not implemented")
}
```
#### Players
A player plays a series of clips back-to-back, provides basic control
functions (play, stop, pause, seek, etc).
Note: Currently, x/mobile/exp/audio package provides an experimental and
highly immature player. With the introduction of the new core interfaces, we
will break the API surface in order to bless the new abstractions.
```
// NewPlayer returns a new Player. It initializes the underlying
// audio devices and the related resources.
// A player can play multiple clips back-to-back. Players will begin
// prefetching the next clip to provide a smooth and uninterrupted
// playback.
func NewPlayer(c ...Clip) (*Player, error)
```
## Compatibility
No compatibility issues.
## Implementation
The current scope of the implementation will be restricted to meet the
requirements listed in the "Short-term goals" sections.
The interfaces will be contributed by Burcu Dogan. The implementation of the
decoders and playback is a team effort and requires additional planning.
The audio package has no dependencies on upcoming Go releases and therefore
doesn't have to fit in the Go release cycle.
## Open issues
- WAV and AIFF both support float PCM values even though the use of float
values is unpopular. Should we consider supporting float values? Float values
mean more expensive encoding and decoding. Even if float values are supported,
they must be optional -- not the primary type to represent values.
- Decoding on desktop. The package will use the system codec libraries
provided by Android and iOS on mobile devices. It is not possible to provide
feature parity for desktop envs in the scope of decoding.
- Playback on desktop. The playback may directly use AudioUnit on iOS, and
libmedia (or stagefright) on Android. The media libraries on the desktop are
highly fragmented and cross-platform libraries are third-party dependencies.
It is unlikely that we can provide an audio package that works out of the box
on desktop if we don't write an audio backend for each platform.
- Hardware acceleration. Should we allow users to bypass the decoders and
stream to the device buffer in the longer term? The scope of the audio package
is primarily mobile devices (which case-by-case supports hardware
acceleration). But if the package will cover beyond the mobile, we should
consider this case.
- Seeking on variable bit rate encoded audio data is hard without a seek table.
## Appendix: Audio Apps Classification
Classification of the audio apps is based on the survey results mentioned
above. This section summarizes which features are highly related to each other.
### Class A
Class A mostly represents games that need to play a background sound (looping
or not) and occasionally play one-shot audio effects.
- Single channel player with looping audio
- Buffering audio files entirely in memory is efficient enough, audio files
are small
- Timing of the playback doesn’t have to be precise, latency is neglectable
### Class B
Class B represents games with advanced audio. Most apps that fit in this
category are using advanced audio engines as their audio backend.
- Multi channel player
- Synchronization between channels/players
- APIs that allow developers to schedule the playback, such as frame-level
timers
- Low latency, timing of the playback needs to be precise
- Mixers, multiple channels need to be multiplexed into a single device buffer
- Music software apps require audio composition, filtering, etc
### Class C
Class C represents the media players.
- Remote streaming
- Playlisting features, multitrack playback features such as prefetching and cross fading
- High-level player controls such as looping and shuffling
- Good decoder support
# Proposal: Localization support in Go
Discussion at https://golang.org/issue/12750.
## Abstract
This proposal gives a big-picture overview of localization support for
Go, explaining how all pieces fit together.
It is intended as a guide to designing the individual packages and to allow
catching design issues early.
## Background
Localization can be a complex matter.
For many languages, localization is more than just translating an English format
string.
For example, a sentence may change depending on properties of the arguments such
as gender or plurality.
In turn, the rendering of the arguments may be influenced by, for example:
language, sentence context (start, middle, list item, standalone, etc.),
role within the sentence (case: dative, nominative, genitive, etc.),
formatting options, and
user-specific settings, like measurement system.
In other words, the format string is selected based on the arguments and the
arguments may be rendered differently based on the format string, or even the
position within the format string.
A localization framework should provide at least the following features:
1. mark and extract text in code to be translated,
1. injecting translated text received from a translator, and
1. formatting values, such as numbers, currencies, units, names, etc.
Language-specific parsing of values belongs in this list as well,
but we consider it to be out of scope for now.
### Localization in Go
Although we have drawn some ideas for the design from other localization
libraries, the design will inevitably be different in various aspects for Go.
Most frameworks center around the concept of a single user per machine.
This leads to concepts like default locale, per-locale loadable files, etc.
Go applications tend to be multi-user and single static libraries.
Also many frameworks predate CLDR-provided features such as varying values
based on plural and gender.
Retrofitting frameworks to use this data is hard and often results in clunky APIs.
Designing a framework from scratch allows designing with such features in mind.
### Definitions
We call a **message** the abstract notion of some semantic content to be
conveyed to the user.
Each message is identified by a key, which will often be
a fmt- or template-style format string.
A message definition defines concrete format strings for a message
called **variants**.
A single message will have at least one variant per supported language.
A message may take **arguments** to be substituted at given insertion points.
An argument may have 0 or more features.
An argument **feature** is a key-value pair derived from the value of this argument.
Features are used to select the specific variant for a message for a given
language at runtime.
A **feature value** is the value of an argument feature.
The set of possible feature values for an attribute can vary per language.
A **selector** is a user-provided string to select a variant based on a feature
or argument value.
## Proposal
Most messages in Go programs pass through either the fmt or one of the template
packages.
We treat each of these two types of packages separately.
### Package golang.org/x/text/message
Package message has drop-in replacements for most functions in the fmt package.
Replacing one of the print functions in fmt with the equivalent in package
message flags the string for extraction and causes language-specific rendering.
Consider a traditional use of fmt:
```go
fmt.Printf("%s went to %s.", person, city)
```
To localize this message, replace fmt with a message.Printer for a given language:
```go
p := message.NewPrinter(userLang)
p.Printf("%s went to %s.", person, city)
```
To localize all strings in a certain scope, the user could assign such a printer
to `fmt`.
Using the Printf of `message.Printer` has the following consequences:
* it flags the format string for translation,
* the format string is now a key used for looking up translations (the format
string is still used as a format string in case of a missing translation),
* localizable types, like numbers are rendered corresponding to p's language.
In practice translations will be automatically injected from
a translator-supplied data source.
But let’s do this manually for now.
The following adds a localized variant for Dutch:
```go
message.Set(language.Dutch, "%s went to %s.", "%s is in %s geweest.")
```
Assuming p is configured with `language.Dutch`, the Printf above will now print
the message in Dutch.
In practice, translators do not see the code and may need more context than just
the format string.
The user may add context to the message by simply commenting the Go code:
```go
p.Printf("%s went to %s.", // Describes the location a person visited.
person, // The Person going to the location.
city, // The location visited.
)
```
The message extraction tool can pick up these comments and pass them to the
translator.
The section on Features and the Rationale chapter present more details on package
message.
### Package golang.org/x/text/{template|html/template}
Templates can be localized by using the drop-in replacement packages of equal name.
They add the following functionality:
* mark to-be-localized text in templates,
* substitute variants of localized text based on the language, and
* use the localized versions of the print builtins, if applicable.
The `msg` action marks text in templates for localization analogous to the
namesake construct in Soy.
Consider code using core’s text/template:
```go
import "text/template"
import "golang.org/x/text/language"
const letter = `
Dear {{.Name}},
{{if .Attended}}
It was a pleasure to see you at the wedding.{{else}}
It is a shame you couldn't make it to the wedding.{{end}}
Best wishes,
Josie
`
// Prepare some data to insert into the template.
type Recipient struct {
Name string
Attended bool
Language language.Tag
}
var recipients = []Recipient{
{"Mildred", true, language.English},
{"Aurélie", false, language.French},
{"Rens", false, language.Dutch},
}
func main() {
// Create a new template and parse the letter into it.
t := template.Must(template.New("letter").Parse(letter))
// Execute the template for each recipient.
for _, r := range recipients {
if err := t.Execute(os.Stdout, r); err != nil {
log.Println("executing template:", err)
}
}
}
```
To localize this program the user may adopt the program as follows:
```go
import "golang.org/x/text/template"
const letter = `
{{msg "Opening of a letter"}}Dear {{.Name}},{{end}}
{{if .Attended}}
{{msg}}It was a pleasure to see you at the wedding.{{end}}{{else}}
{{msg}}It is a shame you couldn't make it to the wedding.{{end}}{{end}}
{{msg "Closing of a letter, followed by name (f)"}}Best wishes,{{end}}
Josie
`
```
and
```go
func main() {
// Create a new template and parse the letter into it.
t := template.Must(template.New("letter").Parse(letter))
// Execute the template for each recipient.
for _, r := range recipients {
if err := t.Language(r.Language).Execute(os.Stdout, r); err != nil {
log.Println("executing template:", err)
}
}
}
```
To make this work, we distinguish between normal and language-specific templates.
A normal template behaves exactly like a template in core, but may be associated
with a set of language-specific templates.
A language-specific template differs from a normal template as follows:
It is associated with exactly one normal template, which we call its base template.
1. A Lookup of an associated template will find the first non-empty result of
a Lookup on:
1. the language-specific template itself,
1. recursively, the result of Lookup on the template for the parent language
(as defined by language.Tag.Parent) associated with its base template, or
1. the base template.
1. Any template obtained from a lookup on a language-specific template will itself
be a language-specific template for the same language.
The same lookup algorithm applies for such templates.
1. The builtins print, println, and printf will respectively call the Sprint,
Sprintln, and Sprintf methods of a message.Printer for the associated language.
A top-level template called `Messages` holds all translations of messages
in language-specific templates. This allows registering of variants using
existing methods defined on templates.
```go
dutch := template.Messages.Language(language.Dutch)
template.Must(dutch.New(`Dear {{.Name}},`).Parse(`Lieve {{.Name}},`))
template.Must(dutch.
New(`It was a pleasure to see you at the wedding.`).
Parse(`Het was een genoegen om je op de bruiloft te zien.`))
// etc.
```
### Package golang.org/x/text/feature
So far we have addressed cases where messages get translated one-to-one in
different languages.
Translations are often not as simple.
Consider the message `"%[1]s went to %[2]s"`, which has the arguments P (a person)
and D (a destination).
This one variant suffices for English.
In French, one needs two:

* gender of P is female: "%[1]s est allée à %[2]s.", and
* gender of P is male: "%[1]s est allé à %[2]s."
The number of variants needed to properly translate a message can vary
wildly per language.
For example, Arabic has six plural forms.
At worst, the number of variants for a language is equal to the Cartesian product
of all possible values for the argument features for this language.
Package feature defines a mechanism for selecting message variants based on
linguistic features of its arguments.
Both the message and template packages allow selecting variants based on features.
CLDR provides data for plural and gender features.
Likewise-named packages in the text repo provide support for each.
An argument may have multiple features.
For example, a list of persons can have both a count attribute (the number of
people in the list) as well as a gender attribute (the combined gender of the
group of people in the list, the determination of which varies per language).
The feature.Select struct defines a mapping of selectors to variants.
In practice, it is created by a feature-specific, high-level wrapper.
For the above example, such a definition may look like:
```go
message.SetSelect(language.French, "%s went to %s",
gender.Select(1, // Select on gender of the first argument.
"female", "%[1]s est allée à %[2]s.",
"other", "%[1]s est allé à %[2]s."))
```
The "1" in the Select statement refers to the first argument, which was our person.
The message definition now expects the first argument to support the gender feature.
For example:
```go
type Person struct {
Name string
gender.Gender
}
person := Person{ "Joe", gender.Male }
p.Printf("%s went to %s.", person, city)
```
The plural package defines a feature type for plural forms.
An obvious consumer is the numbers package.
But any package that has any kind of amount or cardinality (e.g. lists) can use it.
An example usage:
```go
message.SetSelect(language.English, "There are %d file(s) remaining.",
plural.Select(1,
"zero", "Done!",
"one", "One file remaining",
"other", "There are %d files remaining."))
```
This works in English because the CLDR category "zero" and "one" correspond
exclusively to the values 0 and 1.
This is not the case, for example, for Serbian, where "one" is really a category
for a broad range of numbers ending in 1 but not 11.
To deal with such cases, we borrow a notation from ICU to support exact matching:
```go
message.SetSelect(language.English, "There are %d file(s) remaining.",
plural.Select(1,
"=0", "Done!",
"=1", "One file remaining",
"other", "There are %d files remaining."))
```
Besides "=", and in addition to ICU, we will also support the "<" and ">" comparators.
The template packages would add a corresponding ParseSelect to add translation variants.
### Value formatting
We now move from localizing messages to localizing values.
This is a non-exhaustive list of value type that support localized rendering:
* numbers
* currencies
* units
* lists
* dates (calendars, formatting with spell-out, intervals)
* time zones
* phone numbers
* postal addresses
Each type maps to a separate package that roughly provides the same types:
* Value: encapsulates a value and implements fmt.Formatter.
For example, currency.Value encapsulates the amount, the currency, and
whether it should be rendered as cash, accounting, etc.
* Formatter: a func of the form func(x interface{}) Value that creates or wraps
a Value to be rendered according to the Formatter's purpose.
Since a Formatter leaves the actual printing to the implementation of
fmt.Formatter, the value is not printed until after it is passed to one of the
print methods.
This allows formatting flags, as well as other context information to influence
the rendering.
The State object passed to Format needs to provide more information than
what is passed by fmt.State, namely:
* a `language.Tag`,
* locale settings that a user may override relative to the user locale setting
(e.g. preferred time format, measurement system),
* sentence context, such as standalone, start-, mid-, or end-of-sentence, and
* formatting options, possibly defined by the translator.
To accommodate this, we either need to define a text repo-specific State
implementation that Format implementations can type assert to or
define a different Formatter interface.
#### Example: Currencies
We consider this pattern applied to currencies. The Value and Formatter type:
```go
// A Formatter associates formatting information with the given value. x may be a
// Currency, a Value, or a number if the Formatter is associated with a default currency.
type Formatter func(x interface{}) Value
func (f Formatter) NumberFormat(nf number.Formatter) Formatter
...
var Default Formatter = Formatter(formISO)
var Symbol Formatter = Formatter(formSymbol)
var SpellOut Formatter = Formatter(formSpellOut)
type Value struct {
	amount interface{}
	currency Currency
	formatter *settings
}
// Format formats v. If State is a format.State, the value is formatted
// according to the given language. If State is not language-specific, it will
// use number plus ISO code for values and the ISO code for Currency.
func (v Value) Format(s fmt.State, verb rune)
func (v Value) Amount() interface{}
func (v Value) Float() (float64, error)
func (v Value) Currency() Currency
...
```
Usage examples:
```go
p := message.NewPrinter(language.AmericanEnglish)
p.Printf("You pay %s.", currency.USD.Value(3)) // You pay USD 3.
p.Printf("You pay %s.", currency.Symbol(currency.USD.Value(3))) // You pay $3.
p.Printf("You pay %s.", currency.SpellOut(currency.USD.Value(1)) // You pay 1 US Dollar.
spellout := currency.SpellOut.NumberFormat(number.SpellOut)
p.Printf("You pay %s.", spellout(currency.USD.Value(3))) // You pay three US Dollars.
```
Formatters have option methods for creating new formatters.
Under the hood all formatter implementations use the same settings type, a
pointer of which is included as a field in Value.
So option methods can access a formatter’s settings by formatting a dummy value.
Different types of currency types are available for different localized rounding
and accounting practices.
```go
v := currency.CHF.Value(3.123)
p.Printf("You pay %s.", currency.Cash.Value(v)) // You pay CHF 3.15.
spellCash := currency.SpellOut.Kind(currency.Cash).NumberFormat(number.SpellOut)
p.Printf("You pay %s.", spellCash(v)) // You pay three point fifteen Swiss Francs.
```
The API ensures unused tables are not linked in.
For example, the rather large tables for spelling out numbers and currencies
needed for number.SpellOut and currency.SpellOut are only linked in when
the respective formatters are called.
#### Example: units
Units are like currencies but have the added complexity that the amount and
unit may change per locale.
The Formatter and Value types are analogous to those of Currency.
It defines "constructors" for a selection of unit types.
```go
type Formatter func(x interface{}) Value
var (
Symbol Formatter = Formatter(formSymbol)
SpellOut Formatter = Formatter(formSpellOut)
)
// Unit sets the default unit for the formatter. This allows the formatter to
// create values directly from numbers.
func (f Formatter) Unit(u Unit) Formatter
// create formatted values:
func (f Formatter) Value(x interface{}, u Unit) Value
func (f Formatter) Meters(x interface{}) Value
func (f Formatter) KilometersPerHour(x interface{}) Value
…
type Unit int
const SpeedKilometersPerHour Unit = ...
type Kind int
const Speed Kind = ...
```
Usage examples:
```go
p := message.NewPrinter(language.AmericanEnglish)
p.Printf("%d", unit.KilometersPerHour(250)) // 155 mph
```
spelling out the unit names:
```go
p.Print(unit.SpellOut.KilometersPerHour(250)) // 155.343 miles per hour
```
Associating a default unit with a formatter allows it to format numbers directly:
```go
kmh := unit.SpellOut.Unit(unit.SpeedKilometersPerHour)
p.Print(kmh(250)) // 155.343 miles per hour
```
Spell out the number as well:
```go
spellout := unit.SpellOut.NumberFormat(number.SpellOut)
p.Print(spellout.KilometersPerHour(250))
// one hundred fifty-five point three four three miles per hour
```
or perhaps also
```go
p.Print(unit.SpellOut.KilometersPerHour(number.SpellOut(250)))
// one hundred fifty-five point three four three miles per hour
```
Using a formatter, like `number.SpellOut(250)`, just returns a Value wrapped
with the new formatting settings.
The underlying value is retained, allowing its features to select
the proper unit names.
There may be an ambiguity as to which unit to convert to when converting from
US to the metric system.
For example, feet can be converted to meters or centimeters.
Moreover, which one to prefer may differ per language.
If this is an issue we may consider allowing overriding the default unit to
convert in a message.
For example:
%[2:unit=km]f
Such a construct would allow translators to annotate the preferred unit override.
## Details and Rationale
### Formatting
The proposed Go API deviates from a common pattern in other localization APIs by
_not_ associating a Formatter with a language.
Passing the language through State has several advantages:
1. the user needs to specify a language for a message only once, which means
    1. less typing,
    1. no possibility of mismatch, and
    1. no need to initialize a formatter for each language (which may mean on
       every usage),
1. the value is preserved up till selecting the variant, and
1. a string is not rendered until its context is known.
It prevents strings from being rendered prematurely, which, in turn, helps
picking the proper variant and allows translators to pass in options in
formatting strings.
The Formatter construct is a natural way of allowing for this flexibility and
allows for a straightforward and natural API for something that is otherwise
quite complex.
The Value types of the formatting packages conflate data with formatting.
However, formatting types often are strongly correlated to types.
Combining formatting types with values is not unlike associating the time zone
with a Time or rounding information with a number.
Combined with the fact that localized formatting is one of the main purposes
of the text repo, it seems to make sense.
#### Differences from the fmt package
Formatted printing in the message package differs from the equivalent in the
fmt package in various ways:
* An argument may be used solely for its features, or may be unused for
specific variants.
It is therefore possible to have a format string that has no
substitutions even in the presence of arguments.
* Package message dynamically selects a variant based on the
arguments’ features and the configured language.
The format string passed to a formatted print method is mostly used as a
reference or key.
* The variant selection mechanism allows for the definition of variables
(see the section on package feature).
It seems unnatural to refer to these by position.
We contemplate the usage of named arguments for such variables: `%[name]s`.
* Rendered text is always natural language and values render accordingly.
For example, `[]int{1, 2, 3}` will be rendered, in English, as `"1, 2 and 3"`,
instead of `"[1 2 3]"`.
* Formatters may use information about sentence context.
Such meta data must be derived by automated analysis or supplied by a
translator.
Considering the differences with fmt we expect package message to do its own
parsing.
Different substitution points of the same argument may require a different State
object to be passed.
Using fmt’s parser would require rewriting such arguments into different forms
and/or exposing more internals of fmt in the API.
It seems more straightforward for package message to do its own parsing.
Nonetheless, we aim to utilize as much of the fmt package as possible.
#### Currency
Currency is its own package.
In most localization APIs the currency formatter is part of the number formatter.
Currency data is large, though, and putting it in its own package
avoids linking it in unnecessarily.
Separating the currency package also allows greater control over options.
Currencies have specific locale-sensitive rounding and scale settings that
may interact poorly with options provided for a number formatter.
#### Units
We propose to have one large package that includes all unit types.
We could split this package up in, for example, packages for energy, mass,
length, speed etc.
However, there is a lot of overlap in data (e.g. kilometers and kilometers per hour).
Spreading the tables across packages will make sharing data harder.
Also, not all units belong naturally in a specific package.
To mitigate the impact of including large tables, we can have composable modules
of data from which users can compose smaller formatters
(similar to the display package).
### Features
The proposed mechanism for features takes a somewhat different approach
from the ones in OS X and ICU.
It allows mitigating the combinatorial explosion that may occur when combining
features while still being legible.
#### Matching algorithm
The matching algorithm returns the first match of a depth-first search over all cases.
We also allow for variable assignment.
We define the following types (in Go-ey pseudo code):
```
Select struct {
	Feature  string      // identifier of feature type
	Argument interface{} // Argument reference
	Cases    []Case      // The variants.
}
Case struct { Selector string; Value interface{} }
Var struct { Name string; Value interface{} }
Value: Select or String
SelectSequence: [](Select or Var)
```
To select a variant given a set of arguments:
1. Initialize a map m from argument name to argument value.
1. For each v in s:
    1. If v is of type Var, update m[v.Name] = Eval(v.Value, m).
    1. If v is of type Select, then let v be Eval(v, m).
    1. If v is of type string, return v.

Eval(v, m): Value
1. If v is a string, return it.
1. Let f be the feature value for feature v.Feature of argument v.Argument.
1. For each case in v.Cases:
    1. If Match(case.Selector, f, v.Argument), return Eval(case.Value, m).
1. Return nil (no match).
Match(s, cat, arg): string x string x interface{} // Implementation for numbers.
1. If s[0] == ‘=’ return int(s[1:]) == arg.
1. If s[0] == ‘<’ return int(s[1:]) < arg.
1. If s[0] == ‘>’ return int(s[1:]) > arg.
1. If s == cat return true.
1. return s == "other"
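As a sanity check, the numeric Match step above can be transcribed directly into Go. This is only a sketch: a real implementation would compute the plural category (`cat`) from CLDR data, and `match` is a hypothetical helper name, not part of the proposed API.

```go
package main

import (
	"fmt"
	"strconv"
)

// match reports whether selector s matches the numeric argument arg,
// whose plural category cat is assumed to be precomputed from CLDR data.
// It transcribes the numeric Match pseudo-code above.
func match(s, cat string, arg int) bool {
	if len(s) > 1 {
		if n, err := strconv.Atoi(s[1:]); err == nil {
			switch s[0] {
			case '=':
				return arg == n
			case '<':
				return arg < n
			case '>':
				return arg > n
			}
		}
	}
	if s == cat {
		return true
	}
	return s == "other" // "other" is the catch-all selector
}

func main() {
	fmt.Println(match("=0", "other", 0)) // true
	fmt.Println(match("<5", "other", 3)) // true
	fmt.Println(match("one", "one", 1))  // true
	fmt.Println(match("few", "many", 7)) // false
}
```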
A simple data structure encodes the entire Select procedure, which makes it
trivially machine-readable, a condition for including it in a translation pipeline.
#### Full Example
Consider the message `"%[1]s invite %[2]s to their party"`, where arguments 1 and 2
are lists of hosts and guests, respectively, and data:
```go
map[string]interface{}{
"Hosts": []gender.String{
gender.Male.String("Andy"),
gender.Female.String("Sheila"),
},
"Guests": []string{ "Andy", "Mary", "Bob", "Linda", "Carl", "Danny" },
}
```
The following variant selector covers various cases for different values of the
arguments.
It limits the number of guests listed to 4.
```go
message.SetSelect(en, "%[1]s invite %[2]s and %[3]d other guests to their party.",
	plural.Select(1, // Hosts
		"=0", `There is no party. Move on!`,
		"=1", plural.Select(2, // Guests
			"=0", `%[1]s does not give a party.`,
			"other", plural.Select(3, // Other guests count
				"=0", gender.Select(1, // Hosts
					"female", "%[1]s invites %[2]s to her party.",
					"other ", "%[1]s invites %[2]s to his party."),
				"=1", gender.Select(1, // Hosts
					"female", "%[1]s invites %#[2]s and one other person to her party.",
					"other ", "%[1]s invites %#[2]s and one other person to his party."),
				"other", gender.Select(1, // Hosts
					"female", "%[1]s invites %#[2]s and %[3]d other people to her party.",
					"other ", "%[1]s invites %#[2]s and %[3]d other people to his party.")))),
		"other", plural.Select(2, // Guests
			"=0 ", "%[1]s do not give a party.",
			"other", plural.Select(3, // Other guests count
				"=0", "%[1]s invite %[2]s to their party.",
				"=1", "%[1]s invite %#[2]s and one other person to their party.",
				"other ", "%[1]s invite %#[2]s and %[3]d other people to their party."))))
```
<!-- ```go
template.Language(language.English).
New("{{.Hosts}} invite {{.Guests}} to their party.").
ParseSelect(plural.Select(".Hosts",
"=0", `There is no party. Move on!`,
"=1", plural.Select(".Guests",
"=0", `{{.Hosts}} does not give a party.`,
"<5", gender.Select(".Hosts",
"female", `{{.Hosts}} invites {{.Guests}} to her party.`,
"other ", `{{.Hosts}} invites {{.Guests}} to his party.`),
"=5", gender.Select(".Hosts",
"female", `{{.Hosts}} invites {{first 4 .Guests}} and one other
person to her party.`,
"other ", `{{.Hosts}} invites {{first 4 .Guests}} and one other
person to his party.`),
"other", gender.Select(".Hosts",
"female", `{{.Hosts}} invites {{first 4 .Guests}} and {{offset 4 .Guests}}
other people to her party.`,
"other ", `{{.Hosts}} invites {{first 4 .Guests}} and {{offset 4 .Guests}}
other people to his party.`),
),
"other", plural.Select(".Guests",
"=0 ", `{{.Hosts}} do not give a party.`,
"<5 ", `{{.Hosts}} invite {{.Guests}} to their party.`,
"=5 ", `{{.Hosts}} invite {{first 4 .Guests}} and one other person
to their party.`,
"other ", `{{.Hosts}} invite {{first 4 .Guests}} and
{{offset 4 .Guests}} other people to their party.`)))
``` -->
For English, we have three variables to deal with:
the plural form of the hosts and guests and the gender of the hosts.
Both guests and hosts are slices.
Slices have a plural feature (their cardinality) and a gender (based on CLDR data).
We define the flag `#` as an alternate form for lists to drop the comma.
It should be clear how quickly things can blow up when dealing with
multiple features.
There are 12 variants.
For other languages this could be quite a bit more.
Using the properties of the matching algorithm one can often mitigate this issue.
With a bit of creativity, we can remove the two cases where `Len(Guests) == 0`
and add another select block at the start of the list:
```go
message.SetSelect(en, "%[1]s invite %[2]s and %[3]d other guests to their party.",
	plural.Select(2, "=0", `There is no party. Move on!`),
	plural.Select(1,
		"=0", `There is no party. Move on!`,
		…
```
<!-- ```go
template.Language(language.English).
New("{{.Hosts}} invite {{.Guests}} to their party.").
ParseSelect(
plural.Select(".Guests", "=0", `There is no party. Move on!`),
plural.Select(".Hosts",
"=0", `There is no party. Move on!`,
…
``` -->
The algorithm will return from the first select when `len(Guests) == 0`,
so this case will not have to be considered later.
Using Var we can do a lot better, though:
```go
message.SetSelect(en, "%[1]s invite %[2]s and %[3]d other guests to their party.",
	feature.Var("noParty", "There is no party. Move on!"),
	plural.Select(1, "=0", "%[noParty]s"),
	plural.Select(2, "=0", "%[noParty]s"),
	feature.Var("their", gender.Select(1, "female", "her", "other ", "his")),
	// Variables may be overwritten.
	feature.Var("their", plural.Select(1, ">1", "their")),
	feature.Var("invite", plural.Select(1, "=1", "invites", "other ", "invite")),
	feature.Var("guests", plural.Select(3, // other guests
		"=0", "%[2]s",
		"=1", "%#[2]s and one other person",
		"other", "%#[2]s and %[3]d other people")),
	feature.String("%[1]s %[invite]s %[guests]s to %[their]s party."))
```
<!--```go
template.Language(language.English).
New("{{.Hosts}} invite {{.Guests}} to their party.").
ParseSelect(
feature.Var("noParty", "There is no party. Move on!"),
plural.Select(".Hosts", "=0", `{{$noParty}}`),
plural.Select(".Guests", "=0", `{{$noParty}}`),
feature.Var("their", gender.Select(".Hosts",
"female", "her",
"other ", "his")),
// Variables may be overwritten.
feature.Var("their", plural.Select(".Hosts", ">1", "their")),
feature.Var("invite", plural.Select(".Hosts",
"=1", "invites",
"other ", "invite")),
plural.Select(".Guests",
"<5", `{{.Hosts}} {{$invite}} {{.Guests}} to {{$their}} party.`,
"=5", `{{.Hosts}} {{$invite}} {{first 4 .Guests}} and one other person
to {{$their}} party.`,
"other", `{{.Hosts}} {{$invite}} {{first 4 .Guests | printf "%#v"}}
and {{offset 4 .Guests}} other people to {{$their}} party.`))
```-->
This is essentially the same as the example before, but with the use of
variables to reduce the verbosity.
If one always shows all guests, there would only be one variant for describing
the guests attending a party!
#### Comparison to ICU
ICU has a similar approach to dealing with gender and plurals.
The above example roughly translates to:
```
`{num_hosts, plural,
    =0 {There is no party. Move on!}
    other {
        {gender_of_host, select,
            female {
                {num_guests, plural, offset:1
                    =0 {{host} does not give a party.}
                    =1 {{host} invites {guest} to her party.}
                    =2 {{host} invites {guest} and one other person to her party.}
                    other {{host} invites {guest} and # other people to her party.}}}
            male {
                {num_guests, plural, offset:1
                    =0 {{host} does not give a party.}
                    =1 {{host} invites {guest} to his party.}
                    =2 {{host} invites {guest} and one other person to his party.}
                    other {{host} invites {guest} and # other people to his party.}}}
            other {
                {num_guests, plural, offset:1
                    =0 {{host} do not give a party.}
                    =1 {{host} invite {guest} to their party.}
                    =2 {{host} invite {guest} and one other person to their party.}
                    other {{host} invite {guest} and # other people to their party.}}}}}}`
```
Comparison:
* In Go, features are associated with values, instead of passed separately.
* There is no Var construct in ICU.
* Instead, the ICU notation is more flexible and allows for notations like:
```
"{1, plural,
zero {Personne ne se rendit}
one {{0} est {2, select, female {allée} other {allé}}}
other {{0} sont {2, select, female {allées} other {allés}}}} à {3}"
```
* In Go, strings can only be assigned to variables or used in leaf nodes of a
select. We find this to result in more readable definitions.
* The Go notation is fully expressed in terms of Go structs:
* There is no separate syntax to learn.
* Most of the syntax is checked at compile time.
* It is serializable and machine readable without needing another parser.
* In Go, feature types are fully generic.
* Go has no special syntax for constructs like offset (see the third argument
in ICU’s plural select and the "#" for substituting offsets).
We can solve this with pipelines in templates and special interpretation for
flag and verb types for the Format implementation of lists.
* ICU's algorithm seems to prohibit the use of ‘<’ and ‘>’ selectors.
#### Comparison to OS X
OS X recently introduced support for handling plurals and prepared for support
for gender.
The data for selecting variants is stored in the stringsdict file.
This example from the referenced link shows how to vary sentences for
"number of files selected" in English:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>%d files are selected</key>
    <dict>
        <key>NSStringLocalizedFormatKey</key>
        <string>%#@num_files_are@ selected</string>
        <key>num_files_are</key>
        <dict>
            <key>NSStringFormatSpecTypeKey</key>
            <string>NSStringPluralRuleType</string>
            <key>NSStringFormatValueTypeKey</key>
            <string>d</string>
            <key>zero</key>
            <string>No file is</string>
            <key>one</key>
            <string>A file is</string>
            <key>other</key>
            <string>%d files are</string>
        </dict>
    </dict>
</dict>
</plist>
```
The equivalent in the proposed Go format:
```go
message.SetSelect(language.English, "%d files are selected",
	feature.Var("numFilesAre", plural.Select(1,
		"zero", "No file is",
		"one", "A file is",
		"other", "%d files are")),
	feature.String("%[numFilesAre]s selected"))
```
A comparison between OS X and the proposed design:
* In both cases, the selection of variants can be represented in a data structure.
* OS X does not have a specific API for defining the variant selection in code.
* Both approaches allow for arbitrary feature implementations.
* OS X allows for a similar construct to Var to allow substitution of substrings.
* OS X has extended its printf-style format specifier to allow for named substitutions.
The substitution string `"%#@foo@"` will substitute the variable foo.
The equivalent in Go is the less offensive `"%[foo]v"`.
### Code organization
The typical Go deployment is that of a single statically linked binary.
Traditionally, though, most localization frameworks have grouped data in
per-language dynamically-loaded files.
We suggest code organization methods for both use cases below.
#### Example: statically linked package
In the following code, a single file called messages.go contains all collected
translations:
```go
import "golang.org/x/text/message"
func init() {
	for _, e := range entries {
		for _, t := range e.entry {
			message.SetSelect(e.lang, t.key, t.value)
		}
	}
}
type entry struct {
	key   string
	value feature.Value
}

var entries = []struct {
	lang  language.Tag
	entry []entry
}{
	{language.French, []entry{
		{"Hello", feature.String("Bonjour")},
		{"%s went to %s", feature.Select{…}},
		…
	}},
}
```
#### Example: dynamically loaded files
We suggest storing per-language data files in a messages subdirectory:
```go
func NewPrinter(t language.Tag) *message.Printer {
	r, err := os.Open(filepath.Join("messages", t.String()+".json"))
	// handle error
	cat := message.NewCatalog()
	d := json.NewDecoder(r)
	for {
		var msg struct {
			Key   string
			Value []feature.Value
		}
		if err := d.Decode(&msg); err == io.EOF {
			break
		} else if err != nil {
			// handle error
		}
		cat.SetSelect(t, msg.Key, msg.Value...)
	}
	return cat.NewPrinter(t)
}
```
## Compatibility
The implementation of the `msg` action will require some modification to core’s
template/parse package.
Such a change would be backward compatible.
## Implementation Plan
Implementation would start with some of the rudimentary packages in the text
repo, most notably format.
Subsequently, this allows the implementation of the formatting of some specific
types, like currencies.
The messages package will be implemented first.
The template package is more invasive and will be implemented at a later stage.
Work on infrastructure for extracting messages from templates and print
statements will allow integrating the tools with translation pipelines.
# Proposal: Raw XML Token
Author(s): Sam Whited <sam@samwhited.com>
Last updated: 2018-09-01
Discussion at https://golang.org/issue/26756
CL at https://golang.org/cl/127435
## Abstract
This proposal defines a mechanism by which users can emulate the `,innerxml`
struct tag using XML tokens.
## Background
When using the `"*Encoder".EncodeToken` API to write tokens to an XML stream,
it is currently not possible to fully emulate the behavior of `Marshal`.
Specifically, there is no functionality that lets users output XML equivalent to
the `,innerxml` struct tag which inserts raw, unescaped, XML into the output.
For example, consider the following:
```go
e := xml.NewEncoder(os.Stdout)
e.Encode(struct {
	XMLName xml.Name `xml:"raw"`
	Inner   string   `xml:",innerxml"`
}{
	Inner: `<test:test xmlns:test="urn:example:golang"/>`,
})
// Output: <raw><test:test xmlns:test="urn:example:golang"/></raw>
```
This cannot be done with the token based output because all token types are
currently escaped.
For example, attempting to output the raw XML as character data results in the
following:
```go
e.EncodeToken(xml.CharData(rawOut))
e.Flush()
// &lt;test:test xmlns:test=&#34;urn:example:golang&#34;/&gt;
```
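The asymmetry is reproducible with the standard library today. In this self-contained sketch, the helper names `innerXMLOutput` and `tokenOutput` are illustrative, not part of the proposal:

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
)

const raw = `<test:test xmlns:test="urn:example:golang"/>`

// innerXMLOutput marshals raw via the ",innerxml" struct tag,
// which writes the payload verbatim.
func innerXMLOutput() string {
	out, _ := xml.Marshal(struct {
		XMLName xml.Name `xml:"raw"`
		Inner   string   `xml:",innerxml"`
	}{Inner: raw})
	return string(out)
}

// tokenOutput writes raw as CharData via the token API,
// which always escapes its input.
func tokenOutput() string {
	var buf bytes.Buffer
	e := xml.NewEncoder(&buf)
	e.EncodeToken(xml.CharData(raw))
	e.Flush()
	return buf.String()
}

func main() {
	fmt.Println(innerXMLOutput()) // <raw><test:test xmlns:test="urn:example:golang"/></raw>
	fmt.Println(tokenOutput())    // the same markup, but escaped
}
```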
## Proposal
The proposed API introduces an XML pseudo-token: `RawXML`.
```go
// RawXML represents some data that should be passed through without escaping.
// Like a struct field with the ",innerxml" tag, RawXML is written to the
// stream verbatim and is not subject to the usual escaping rules.
type RawXML []byte
// Copy creates a new copy of RawXML.
func (r RawXML) Copy() RawXML { … }
```
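Outside `encoding/xml` the type is just a named byte slice; the encoder integration cannot be shown here. A plausible `Copy` body (an assumption — the actual CL may differ) is:

```go
package main

import "fmt"

// RawXML mirrors the proposed pseudo-token. This only sketches the type
// and its Copy method, not the encoder's pass-through behavior.
type RawXML []byte

// Copy creates a copy of RawXML that shares no backing storage with the
// original, so later mutation of the original cannot affect the copy.
func (r RawXML) Copy() RawXML {
	return RawXML(append([]byte(nil), r...))
}

func main() {
	orig := RawXML(`<a/>`)
	dup := orig.Copy()
	orig[1] = 'b'
	fmt.Println(string(orig), string(dup)) // <b/> <a/>
}
```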
## Rationale
When attempting to match the output of legacy XML encoders which may produce
broken escaping, or of XML encoders that support features not currently
provided by the [`encoding/xml`] package (such as namespace prefixes),
it is often desirable to write raw XML directly.
However, if the user is primarily using the token stream API, it may not be
desirable to switch between encoding tokens and encoding native structures which
is cumbersome and forces a call to `Flush`.
Being able to generate the same output from both the SAX-like and DOM-like APIs
would also allow future proposals the option of fully unifying the two APIs by
creating an encoder equivalent to the `NewTokenDecoder` function.
## Compatibility
This proposal introduces one new exported type that would be covered by the
compatibility promise.
## Implementation
Implementation of this proposal is trivial, comprising some 5 lines of code
(excluding tests and comments).
[CL 127435] has been created to demonstrate the concept.
## Open issues
None.
[`encoding/xml`]: https://golang.org/pkg/encoding/xml/
[CL 127435]: https://golang.org/cl/127435
# Proposal: Permit Signed Integers as Shift Counts for Go 2
Robert Griesemer
Last updated: January 17, 2019
Discussion at [golang.org/issue/19113](https://golang.org/issue/19113).
## Summary
We propose to change the language spec such that the shift count
(the rhs operand in a `<<` or `>>` operation)
may be a _signed_ or unsigned (non-constant) integer,
or any non-negative constant value that can be represented as an integer.
## Background
See **Rationale** section below.
## Proposal
We change the language spec regarding shift operations as follows:
In the section on [Operators](https://golang.org/ref/spec#Operators), we change the text:
> The right operand in a shift expression must have unsigned integer type
> or be an untyped constant that can be converted to unsigned integer type.
to
> The right operand in a shift expression must have integer type
> or be an untyped constant that can be converted to an integer type.
> If the right operand is constant, it must not be negative.
Furthermore, in the section on [Integer operators](https://golang.org/ref/spec#Arithmetic_operators), we change the text:
> The shift operators shift the left operand by the shift count specified by the right operand.
to
> The shift operators shift the left operand by the shift count specified by the right operand.
> A run-time panic occurs if a non-constant shift count is negative.
## Rationale
Since Go's inception, shift counts had to be of unsigned integer type
(or a non-negative constant representable as an unsigned integer).
The idea behind this rule was that
(a) the spec didn't have to explain what happened for negative values,
and (b) the implementation didn't have to deal with negative values
possibly occurring at run-time.
In retrospect, this may have been a mistake;
for example see
[this comment by Russ Cox](https://github.com/golang/go/issues/18616#issuecomment-278852766)
during the development of
[`math/bits`](https://golang.org/pkg/math/bits).
It turns out that we could actually change the spec
in a backward-compatible way in this regard,
and this proposal is suggesting that we do exactly that.
There are other language features where the result (`len(x)`),
argument (`n` in `make([]T, n)`) or constant (`n` in `[n]T`)
are known to be never negative or must not be negative,
yet we return an `int` (for `len`, `cap`) or permit any integer type.
Requiring an unsigned integer type for shift counts is frequently
a non-issue because the shift count is constant (see below);
but in some cases explicit `uint` conversions are needed,
or the code around the shift is carefully crafted to use unsigned integers.
In either case, readability is slightly compromised,
and more decision making is required when crafting the code:
Should we use a conversion or type other variables as unsigned integers?
Finally, and perhaps most importantly, there may be cases
where we simply convert an integer to an unsigned integer
and in the process inadvertently make an (invalid) negative value
positive, possibly hiding a bug that way
(resulting in a shift by a very large number,
leading to 0 or -1 depending on the shifted value).
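The silent wrap-around is easy to demonstrate; `buggyShift` below is a hypothetical helper illustrating the pattern this proposal wants to make unnecessary:

```go
package main

import "fmt"

// buggyShift mimics code that blindly converts a possibly negative shift
// count to uint: a negative n becomes a huge positive count, and the
// resulting oversize shift quietly yields 0 instead of reporting the bug.
func buggyShift(x int64, n int) int64 {
	return x << uint(n)
}

func main() {
	fmt.Println(buggyShift(256, 3))  // 2048
	fmt.Println(buggyShift(256, -3)) // 0: the bug is hidden, not reported
}
```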
If we permit any integer type, the existing code will continue to work.
Places where we currently use a `uint` conversion won't need it anymore,
and code that is crafted for an unsigned shift count
may not require unsigned integers elsewhere.
(There’s a remote chance that some code relies on the
fact that a negative value becomes a large positive value
with a uint conversion; such code would continue to need the uint conversion.
We cannot remove the uint conversions without testing.)
An investigation of shifts in the current standard library and tests
as of 2/15/2017 (excluding package-external tests) found:
- 8081 shifts total; 5457 (68%) right shifts vs 2624 (32%) left shifts
- 6151 (76%) of those are shifts by a (typed or untyped) constant
- 1666 (21%) shifts are in tests (_test.go files)
- 253 (3.1%) shifts use an explicit uint conversion for the shift count
If we only look at shifts outside of test files we have:
- 6415 shifts total; 4548 (71%) right shifts vs 1867 (29%) left shifts
- 5759 (90%) of those are shifts by a (typed or untyped) constant
- 243 (3.8%) shifts use an explicit uint conversion for the shift count
The overwhelming majority (90%) of shifts
outside of testing code is by untyped constant values,
and none of those turns out to require a conversion.
This proposal won't affect that code.
From the remaining 10% of all shifts,
38% (3.8% of the total number of shifts) require a `uint` conversion.
That's a significant number.
In the remaining 62% of non-constant shifts,
the shift count expression must be using a variable
that's of unsigned integer type, and often a conversion is required there.
A typical example is [archive/tar/strconv.go:88](https://golang.org/src/archive/tar/strconv.go#L88):
```Go
func fitsInBase256(n int, x int64) bool {
var binBits = uint(n-1) * 8 // <<<< uint cast
return n >= 9 || (x >= -1<<binBits && x < 1<<binBits)
}
```
In this case, `n` is an incoming argument,
and we can't be sure that `n > 1` without further analysis of the callers,
and thus there's a possibility that `n - 1` is negative.
The `uint` conversion hides that error silently.
Another one is [cmd/compile/internal/gc/esc.go:1460](https://golang.org/src/cmd/compile/internal/gc/esc.go#L1460):
```Go
shift := uint(bitsPerOutputInTag*(vargen-1) + EscReturnBits) // <<<< uint cast
old := (e >> shift) & bitsMaskForTag
```
Or [src/fmt/scan.go:604](https://golang.org/src/fmt/scan.go#L604):
```Go
n := uint(bitSize) // uint cast
x := (r << (64 - n)) >> (64 - n)
```
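The double shift in this excerpt is a truncation idiom: shifting left and then right by `64 - n` discards all but the low `n` bits. A standalone sketch (`lowBits` is an illustrative name, not from the excerpt):

```go
package main

import "fmt"

// lowBits keeps only the low n bits of r, using the same
// shift-up-then-down idiom as the fmt/scan excerpt above.
func lowBits(r uint64, n uint) uint64 {
	return (r << (64 - n)) >> (64 - n)
}

func main() {
	fmt.Printf("%#x\n", lowBits(0xFFF0, 8)) // 0xf0
}
```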
Many (most?) of the non-constant shifts
that don't use an explicit `uint` conversion in the shift expression itself
appear to have a `uint` conversion before that expression.
Most (all?) of these conversions wouldn't be necessary anymore.
The drawback of permitting signed integers
where negative values are not permitted is that we need to check
for negative values at run-time and panic as needed,
as we do elsewhere (e.g., for `make`).
This requires a bit more code; an estimated minimum of
two extra instructions per non-constant shift (a test and a branch).
However, none of the existing code will incur that cost
because all shift counts are unsigned integers at this point,
thus the compiler can omit the check.
For new code using non-constant integer shift counts,
often the compiler may be able to prove that
the operand is non-negative and then also avoid the extra instructions.
The compiler can already often prove that a value is non-negative
(done anyway for automatic bounds check elimination),
and in that case it can avoid the new branch entirely.
Of course, as a last resort,
an explicit `uint` conversion or mask in the source code
will allow programmers to force the removal of the check,
just as an explicit mask of the shift count today
avoids the oversize shift check.
On the plus side, almost all code that used a `uint` conversion
before won't need it anymore, and it will be safer for that
since possibly negative values will not be silently converted into positive ones.
## Compatibility
This is a backward-compatible language change:
Any valid program will continue to be valid,
and will continue to run exactly the same,
without any performance impact.
New programs may be using non-constant integer shift counts
as right operands in shift operations.
Except for fairly small changes to the spec,
the compiler, and go/types,
(and possibly go/vet and golint if they look at shift operations),
no other code needs to be changed.
There's a (remote) chance that some code makes intentional
use of negative shift count values converted to unsigned:
```Go
var shift int = <some expression> // use negative value to indicate that we want a 0 result
result := x << uint(shift)
```
Here, `uint(shift)` will produce a very large positive
value if `shift` is negative, resulting in `x << uint(shift)` becoming 0.
Because such code required an explicit conversion
and will continue to have an explicit conversion, it will continue to work.
Programmers removing uint conversions from their code will
need to keep this in mind. Most of the time, however, a panic
resulting from removing the conversion will indicate a bug.
## Implementation
The implementation requires:
- Adjusting the compiler’s type-checker to allow signed integer shift counts
- Adjusting the compiler’s back-end to generate the extra test
- Possibly some (minimal) runtime work to support the new runtime panic
- Adjusting go/types to allow signed integer shift counts
- Adjusting the Go spec as outlined earlier in this proposal
- Adjusting gccgo accordingly (type-checker and back-end)
- Testing the new changes by adding new tests
No library changes will be needed as this is a 100% backward-compatible change.
Robert Griesemer and Keith Randall plan to split the work
and aim to have all the changes ready at the start of the Go 1.13 cycle,
around February 1. Ian Lance Taylor will look into the gccgo changes.
As noted in our
[“Go 2, here we come!” blog post](https://blog.golang.org/go2-here-we-come),
the development cycle will serve as a way to collect experience about
these new features and feedback from (very) early adopters.
At the release freeze, May 1, we will revisit the proposed features
and decide whether to include them in Go 1.13.
# Proposal: Go Heap Dump Viewer
Author(s): Michael Matloob
Last updated: 20 July 2016
Discussion at https://golang.org/issue/16410
## Abstract
This proposal is for a heap dump viewer for Go programs. This proposal will provide a
web-based, graphical viewer as well as packages for analyzing and understanding heap
dumps.
## Background
Sometimes Go programs use too much memory and the programmer wants to know why. Profiling
gives the programmer statistical information about rates of allocation, but doesn't give
a specific concrete snapshot that can explain why a variable is live or how many
instances of a given type are live.
There currently exists a tool written by Keith Randall
that takes heap dumps produced by `runtime/debug.WriteHeapDump` and converts them into
the hprof format, which can be understood by Java heap analysis tools, but there
are some issues with the tool in its current state. First, the tool is
out of sync with the heaps dumped by Go. In addition, that tool got its type information from
data structures maintained by the GC algorithm, but as the GC has advanced, it has been
storing less and less type information over time. Because of those issues, we'll have to
make major changes to the tool or perhaps rewrite the whole thing.
Also, the process of getting a heap analysis on the screen from a running Go program involves
multiple tools and dependencies, and is more complicated than it needs to be. There should
be a simple and fast "one-click" solution to make it as easy as possible to understand
what's happening in a program's heap.
## Proposal
TODO(matloob): Some of the details are still fuzzy, but here's the general outline of a solution:
We'll use ELF core dumps as the source format for our heap analysis tools. We would build packages that would use the
debug information in the DWARF section of the dump to find the roots and reconstruct type
information for as much of the program as it can. Implementing this will likely involve improving
the DWARF data produced by the compiler.
Windows doesn't traditionally use core files, and darwin uses mach-o as its core dump format,
so we'll have to provide a mechanism for users on those platforms to extract ELF core dumps
from their programs.
We'd use those packages to build a graphical web-based tool for viewing and analyzing heap dumps.
The program would be pointed to a core dump and would serve a graphical web app that could be used
to analyze the heap.
Ideally, there will be a 'one-click' solution to get from running program to dump. One possible way
to do this would be to add a library to expose a special HTTP handler. Requesting the page
would trigger a core dump to a user-specified location on disk while the program's running, and start
the heap dump viewer program.
## Rationale
TODO(matloob): More thorough discussion.
The primary rationale for this feature is that users want to understand the memory usage of their programs
and we don't currently provide convenient ways of doing that. Adding a heap dump viewer will allow us to
do that.
### Heap dump format
There are three candidates for the format our tools will consume: the current format output by
the Go heap dumper, the hprof format, and the ELF format proposed here.
The advantage of using the current format is that we already have tools that produce it and consume it. But the format
is non-standard and requires a strong dependence between the heap viewer and the runtime. That's been one
of the problems with the current viewer. And the format produced by the runtime has changed slightly in each
of the last few Go releases because it's tightly coupled with the Go runtime.
The advantage of the hprof format is that there already exist many tools for analyzing hprof dumps.
It will be a good idea to consider this format more thoroughly before making a decision. On the
other hand many of those tools are neither polished nor easy to use. We can probably build
better tools tailored for Go without great effort.
The advantage of understanding ELF is that we can use the same tools to look at cores produced when a program
OOMs (at least on Linux) as we do to examine heap dumps. Another benefit is that some cluster
environments already collect and store core files when programs fail in production. Reusing this
machinery would help Go programmers in those environments. And there already exist tools that grab core dumps
so we might be able to reduce the amount of code in the runtime for producing dumps.
## Compatibility
As long as the compiler can output all necessary data needed to reconstruct type information for the heap
in the DWARF data, we won't need to have a strong dependency on the Go distribution. The code can live in a subrepo
not subject to the Go compatibility guarantee.
## Implementation
The implementation will broadly consist of three parts: First, support in the compiler and runtime for dumping
all the data needed by the viewer; second, 'backend' tools that understand the format; and third, a 'frontend'
viewer for those tools.
### Compiler and Runtime Work
TODO(matloob): more details
The compiler work will mostly consist of filling any holes in the DWARF data that we need to recover type
information for data in the heap.
If we decide to use ELF cores, we may need runtime support for dumping cores, especially on platforms that
don't dump cores in ELF format.
### Heap libraries and viewer
We will provide a reusable library that decodes a core file as a Go object graph with partial type information.
Users can build their own tools based on this low-level library, but we will also provide a web-based graphical tool for
viewing and querying heap graphs.
These are some of the types of queries we aim to answer with the heap viewer:
* Show a histogram of live variables grouped by type
* Which variables account for the most memory?
* What is a path from a GC root to this variable?
* How much memory would become garbage if this variable became unreachable
or this pointer were set to nil?
* What are the inbound/outbound pointer edges to this node (variable)?
* How much memory is used by a variable, considering padding, alignment, and span size?
## Open issues (if applicable)
Most of this proposal is open at this point, including:
* the heap dump format
* the design and implementation of the backend packages
* the tools we use to build the frontend client.
# Proposal: Concurrent stack re-scanning
Author(s): Austin Clements, Rick Hudson
Last updated: 2016-10-18
Discussion at https://golang.org/issue/17505.
**Note:** We are not actually proposing this.
This design was developed before proposal #17503, which is a
dramatically simpler solution to the problem of stack re-scanning.
We're posting this design doc for its historical value.
## Abstract
Since the release of the concurrent garbage collector in Go 1.5, each
subsequent release has further reduced stop-the-world (STW) time by
moving more tasks to the concurrent phase.
As of Go 1.7, the only non-trivial STW task is stack re-scanning.
We propose to make stack re-scanning concurrent for Go 1.8, likely
resulting in sub-millisecond worst-case STW times.
## Background
Go's concurrent garbage collector consists of four phases: mark, mark
termination, sweep, and sweep termination.
The mark and sweep phases are *concurrent*, meaning that the
application (the *mutator*) continues to run during these phases,
while the mark termination and sweep termination phases are
*stop-the-world* (STW), meaning that the garbage collector pauses the
mutator for the duration of the phase.
Since Go 1.5, we've been steadily moving tasks from the STW phases to
the concurrent phases, with a particular focus on tasks that take time
proportional to something under application control, such as heap size
or number of goroutines.
As a result, in Go 1.7, most applications have sub-millisecond STW
times.
As of Go 1.7, the only remaining application-controllable STW task is
*stack re-scanning*.
Because of this one task, applications with large numbers of active
goroutines can still experience STW times in excess of 10ms.
Stack re-scanning is necessary because stacks are *permagray* in the
Go garbage collector.
Specifically, for performance reasons, there are no write barriers for
writes to pointers in the current stack frame.
As a result, even though the garbage collector scans all stacks at the
beginning of the mark phase, it must re-scan all modified stacks while
the world is stopped to catch any pointers the mutator "hid" on the
stack.
Unfortunately, this makes STW time proportional to the total amount of
stack that needs to be rescanned.
Worse, stack scanning is relatively expensive (~5ms/MB).
Hence, applications with a large number of active goroutines can
quickly drive up STW time.
## Proposal
We propose to make stack re-scanning concurrent using a *transitive
mark* write barrier.
In this design, we add a new concurrent phase between mark and mark
termination called *stack re-scan*.
This phase starts as soon as the mark phase has marked all objects
reachable from roots *other than stacks*.
The phase re-scans stacks that have been modified since their initial
scan, and enables a special *transitive mark* write barrier.
Re-scanning and the write barrier ensure the following invariant
during this phase:
> *After a goroutine stack G has been re-scanned, all objects locally
> reachable to G are black.*
This depends on a goroutine-local notion of reachability, which is the
set of objects reachable from globals or a given goroutine's stack or
registers.
Unlike regular global reachability, this is not stable: as goroutines
modify heap pointers or communicate, an object that was locally
unreachable to a given goroutine may become locally reachable.
However, the concepts are closely related: a globally reachable object
must be locally reachable by at least one goroutine, and, conversely,
an object that is not locally reachable by any goroutine is not
globally reachable.
This invariant ensures that re-scanning a stack *blackens* that stack,
and that the stack remains black since the goroutine has no way to
find a white object once its stack has been re-scanned.
Furthermore, once every goroutine stack has been re-scanned, marking
is complete.
Every globally reachable object must be locally reachable by some
goroutine and, once every stack has been re-scanned, every object
locally reachable by some goroutine is black, so it follows that every
globally reachable object is black once every stack has been
re-scanned.
### Transitive mark write barrier
The transitive mark write barrier for an assignment `*dst = src`
(where `src` is a pointer) ensures that all objects reachable from
`src` are black *before* writing `src` to `*dst`.
Writing `src` to `*dst` may make any object reachable from `src`
(including `src` itself) locally reachable to some goroutine that has
been re-scanned.
Hence, to maintain the invariant, we must ensure these objects are all
black.
To do this, the write barrier greys `src` and then drains the mark
work queue until there are no grey objects (using the same work queue
logic that drives the mark phase).
At this point, it writes `src` to `*dst` and allows the goroutine to
proceed.
The write barrier must not perform the write until all simultaneous
write barriers are also ready to perform the write.
We refer to this as *mark quiescence*.
To see why this is necessary, consider two simultaneous write barriers
for `*D1 = S1` and `*D2 = S2` on an object graph that looks like this:
    G1 [b] → D1 [b]   S1 [w]
                         ↘
                           O1 [w] → O2 [w] → O3 [w]
                         ↗
             D2 [b]   S2 [w]
Goroutine *G1* has been re-scanned (so *D1* must be black), while *Sn*
and *On* are all white.
Suppose the *S2* write barrier blackens *S2* and *O1* and greys *O2*,
then the *S1* write barrier blackens *S1* and observes that *O1* is
already black:
    G1 [b] → D1 [b]   S1 [b]
                         ↘
                           O1 [b] → O2 [g] → O3 [w]
                         ↗
             D2 [b]   S2 [b]
At this point, the *S1* barrier has run out of local work, but the
*S2* barrier is still going.
If *S1* were to complete and write `*D1 = S1` at this point, it would
make white object *O3* reachable to goroutine *G1*, violating the
invariant.
Hence, the *S1* barrier cannot complete until the *S2* barrier is also
ready to complete.
This requirement sounds onerous, but it can be achieved in a simple
and reasonably efficient manner by sharing a global mark work queue
between the write barriers.
This reuses the existing mark work queue and quiescence logic and
allows write barriers to help each other to completion.
### Stack re-scanning
The stack re-scan phase re-scans the stacks of all goroutines that
have run since the initial stack scan to find pointers to white
objects.
The process of re-scanning a stack is identical to that of the initial
scan, except that it must participate in mark quiescence.
Specifically, the re-scanned goroutine must not resume execution until
the system has reached mark quiescence (even if no white pointers are
found on the stack).
Otherwise, the same sorts of races that were described above are
possible.
There are multiple ways to realize this.
The whole stack scan could participate in mark quiescence, but this
would block any contemporaneous stack scans or write barriers from
completing during a stack scan if any white pointers were found.
Alternatively, each white pointer found on the stack could participate
individually in mark quiescence, blocking the stack scan at that
pointer until mark quiescence, and the stack scan could again
participate in mark quiescence once all frames had been scanned.
We propose an intermediate: gather small batches of white pointers
from a stack at a time and reach mark quiescence on each batch
individually, as well as at the end of the stack scan (even if the
final batch is empty).
### Other considerations
Goroutines that start during stack re-scanning cannot reach any white
objects, so their stacks are immediately considered black.
Goroutines can also share pointers through channels, which are often
implemented as direct stack-to-stack copies.
Hence, channel receives also require write barriers in order to
maintain the invariant.
Channel receives already have write barriers to maintain stack
barriers, so there is no additional work here.
## Rationale
The primary drawback of this approach to concurrent stack re-scanning
is that a write barrier during re-scanning could introduce significant
mutator latency if the transitive mark finds a large unmarked region
of the heap, or if overlapping write barriers significantly delay mark
quiescence.
However, we consider this situation unlikely in non-adversarial
applications.
Furthermore, the resulting delay should be no worse than the mark
termination STW time applications currently experience, since mark
termination has to do exactly the same amount of marking work, in
addition to the cost of stack scanning.
### Alternative approaches
An alternative solution to concurrent stack re-scanning would be to
adopt DMOS-style quiescence [Hudson '97].
In this approach, greying any object during stack re-scanning (either
by finding a pointer to a white object on a stack or by installing a
pointer to a white object in the heap) forces the GC to drain this
marking work and *restart* the stack re-scanning phase.
This approach has a much simpler write barrier implementation that is
constant time, so the write barrier would not induce significant
mutator latency.
However, unlike the proposed approach, the amount of work performed by
DMOS-style stack re-scanning is potentially unbounded.
This interacts poorly with Go's GC pacer.
The pacer enforces the goal heap size by making allocation and GC work
proportional, but this requires an upper bound on possible GC work.
As a result, if the pacer underestimates the amount of re-scanning
work, it may need to block allocation entirely to avoid exceeding the
goal heap size.
This would be an effective STW.
There is also a hybrid solution: we could use the proposed transitive
marking write barrier, but bound the amount of work it can do (and
hence the latency it can induce).
If the write barrier exceeds this bound, it performs a DMOS-style
restart.
This is likely to get the best of both worlds, but also inherits the
sum of their complexity.
A final alternative would be to eliminate concurrent stack re-scanning
entirely by adopting a *deletion-style* write barrier [Yuasa '90].
This style of write barrier allows the initial stack scan to *blacken*
the stack, rather than merely greying it (still without the need for
stack write barriers).
For full details, see proposal #17503.
## Compatibility
This proposal does not affect the language or any APIs and hence
satisfies the Go 1 compatibility guidelines.
## Implementation
We do not plan to implement this proposal.
Instead, we plan to implement proposal #17503.
The implementation steps are as follows:
1. While not strictly necessary, first make GC assists participate in
stack scanning.
Currently this is not possible, which increases mutator latency at
the beginning of the GC cycle.
This proposal would compound this effect by also blocking GC
assists at the end of the GC cycle, causing an effective STW.
2. Modify the write barrier to be pre-publication instead of
post-publication.
Currently the write barrier occurs after the write of a pointer,
but this proposal requires that the write barrier complete
transitive marking *before* writing the pointer to its destination.
A pre-publication barrier is also necessary for
[ROC](https://golang.org/s/gctoc).
3. Make the mark completion condition precise.
Currently it's possible (albeit unlikely) to enter mark termination
before all heap pointers have been marked.
This proposal requires that we not start stack re-scanning until
all objects reachable from globals are marked, which requires a
precise completion condition.
4. Implement the transitive mark write barrier.
This can reuse the existing work buffer pool lists and logic,
including the global quiescence barrier in getfull.
It may be necessary to improve the performance characteristics of
the getfull barrier, since this proposal will lean far more heavily
on this barrier than we currently do.
5. Check stack re-scanning code and make sure it is safe during
non-STW.
Since this only runs during STW right now, it may omit
synchronization that will be necessary when running during non-STW.
This is likely to be minimal, since most of the code is shared with
the initial stack scan, which does run concurrently.
6. Make stack re-scanning participate in write barrier quiescence.
7. Create a new stack re-scanning phase.
Make mark 2 completion transition to stack re-scanning instead of
mark termination and enqueue stack re-scanning root jobs.
Once all stack re-scanning jobs are complete, transition to mark
termination.
## Acknowledgments
We would like to thank Rhys Hiltner (@rhysh) for suggesting the idea
of a transitive mark write barrier.
## References
[Hudson '97] R. L. Hudson, R. Morrison, J. E. B. Moss, and D. S.
Munro. Garbage collecting the world: One car at a time. In *ACM
SIGPLAN Notices* 32(10):162–175, October 1997.
[Yuasa '90] T. Yuasa. Real-time garbage collection on general-purpose
machines. *Journal of Systems and Software*, 11(3):181–198, 1990.
# Proposal: Extended forwards compatibility in Go
Russ Cox \
December 2022
Earlier discussion at https://go.dev/issue/55092.
Proposal at https://go.dev/issue/57001.
## Abstract
Many people believe the `go` line in the `go.mod` file specifies which Go toolchain to use.
This proposal would correct this widely held misunderstanding by making it reality.
At the same time, the proposal would improve forward compatibility by making sure
that old Go toolchains never try to build newer Go programs.
Define the “work module” as the one containing the directory
where the go command is run. We sometimes call this the “main module”,
but I am using “work module” in this document for clarity.
Updating the `go` line in the `go.mod` of the work module,
or the `go.work` file in the current workspace,
would change the minimum Go toolchain used to run go commands.
A new `toolchain` line would provide finer-grained control over Go toolchain selection.
An environment variable `GOTOOLCHAIN` would control this new behavior.
The default, `GOTOOLCHAIN=auto`, would use the information in `go.mod`.
Setting GOTOOLCHAIN to something else would override the `go.mod`.
For example, to test the package in the current directory with Go 1.17.2:
    GOTOOLCHAIN=go1.17.2 go test
## Background
The meaning of the current `go` line in the `go.mod` file is underdocumented
and widely misunderstood.
- Some people believe it sets the minimum version of Go that can be used to build the code.
This is not true: any version of Go will try to build the code, but an older one will
add a note after any compile failure pointing out that perhaps a newer version of Go is needed.
- Some people believe it sets the exact version of Go to use. This is also not true.
The installed version of go is always what runs today.
These are reasonable beliefs. They are just not true.
Today, the only purpose of the `go` line is to determine the Go language version
that the compiler uses when compiling a particular source file.
If a module's `go.mod` says `go 1.16`, then the compiler makes sure to
provide the Go 1.16 language semantics when compiling source files
inside that module.
For example, Go 1.13 added `0o777` syntax for octal literals.
If `go.mod` says `go 1.12`, then the compiler rejects code containing `0o777`.
If `go.mod` says `go 1.13`, then the compiler accepts `0o777`.
Of course, a `go.mod` that says `go 1.13` might still only use Go 1.12 features.
To improve compatibility and avoid ecosystem fragmentation, Go 1.12 will still
try to compile code marked `go 1.13`. If it succeeds, the `go` command
assumes everything went well.
If it fails
(for example, because the code says `0o777` and the compiler
does not know what that means), then the `go` command prints
a notice about the `go 1.13` line after the actual failure,
in case what the user needs to know is to update to Go 1.13 or later.
These version failures are often mysterious, since the compiler errors
betray the older Go's complete and utter confusion
at the new program,
which in turn confuse the developers running the build.
Printing the version mismatch note at the end is better than not printing it,
but it's still not a great experience.
We can improve this experience by having the older Go version
download and re-exec a newer Go version
when the go.mod file needs one.
In this hypothetical world, the Go 1.12 `go` command would
see that it is too old and then download and use Go 1.13 for the build instead.
To be clear, Go 1.12 didn't work this way and never will.
But I propose that some future version of Go should.
Automatic downloading and use of the version of the Go toolchain
listed in the `go.mod` file
would match the automatic download and use
of the versions of required modules listed in the `go.mod` file.
It would also give code a simple way to declare that it needs a newer Go toolchain,
for example because it depends on a bug fix issued in that toolchain.
[Cloud Native Buildpacks](https://buildpacks.io/) are an example of the
bad effects of misunderstanding the meaning of the `go` line.
Today they actually _do_ use the line to select the Go toolchain:
if you have a module that says `go 1.12`, whether you are
trying to keep compatibility with the Go 1.12 language
or you just started out using Go 1.12 and have not needed
to update the line to access any new language features,
Cloud Native Buildpacks will always _build_ your code with Go 1.12,
even if much newer releases of Go exist.
Specifically, they will use the latest point release of Go 1.12.
This choice is unfortunate for two reasons.
First, people are using older releases of Go than they realize.
Second, this leads to non-repeatable builds.
Despite our being very careful, it can of course happen
that code that worked with Go 1.12.8 does not work with Go 1.12.9:
perhaps the code depended on the bug being fixed.
With Cloud Native Buildpacks, a deployment that works today
may break tomorrow if Go 1.12.9 has been released in the interim,
because the chosen release of Go changes based on details not controlled by
the `go.mod`.
If we accidentally issued a Go 1.12.9 that broke all Go programs running in containers,
then every Cloud Native Buildpack user with a `go 1.12` line
would get broken builds on their next redeploy
without ever asking to update to Go 1.12.9.
This is a perfect example of a [low-fidelity build](https://research.swtch.com/vgo-mvs).
The GitHub Action `setup-go` does something similar
with its [`go-version-file` directive](https://github.com/actions/setup-go#getting-go-version-from-the-gomod-file).
It has the same problems that Cloud Native Buildpacks do.
On the other hand, we can also take Cloud Native Buildpacks and the `setup-go` GitHub Action as
evidence that people expect that line to select the Go toolchain,
at least in the work module.
After all, when a module says `require golang.org/x/sys v0.0.1`,
we all understand that means any build of the module uses that version or later.
Why does `go 1.12` _not_ mean that?
I propose that it should.
For more fine-grained control, I also propose a new `toolchain` line in `go.mod`.
One final feature of treating the `go` version this way is that
it would provide a way to fix for loop scoping,
as discussed in [discussion #56010](https://github.com/golang/go/discussions/56010).
If we make that change, older Go toolchains must not assume
that they can compile newer Go code successfully just because
there are no compiler errors. So this proposal is a prerequisite
for any proposal to do the loop change.
See also my [talk on this topic at GopherCon](https://www.youtube.com/watch?v=v24wrd3RwGo).
## Proposal
The proposal has five parts:
- the GOTOOLCHAIN environment and configuration variable,
- a change to the way the `go` line is interpreted in the work module along with a new `toolchain` line,
- changes to `go get` to allow updating the `go` toolchain,
- a special case to allow Go distributions to be downloaded like modules,
- and changing the `go` command startup procedure.
### The GOTOOLCHAIN environment and configuration variable
The GOTOOLCHAIN environment variable,
configurable as usual with `go env -w`,
will control which toolchain of Go runs when you run `go`.
Specifically, a new enough installed Go toolchain
will know to consult GOTOOLCHAIN and potentially download
and re-exec a different toolchain before proceeding.
This will allow invocations like
    GOTOOLCHAIN=go1.17.2 go test
to test a package with Go 1.17.2. Similarly, to try a release candidate:
    GOTOOLCHAIN=go1.18rc1 go build -o myprog.exe
Setting `GOTOOLCHAIN=local` will mean to use the locally installed Go toolchain,
never downloading a different one; this is the behavior we have today.
Setting `GOTOOLCHAIN=auto` will mean to use the release named in the
work module's `go.mod` when it is newer than the locally installed Go toolchain.
The default setting of GOTOOLCHAIN will depend on the Go toolchain.
Standard Go releases will default to `GOTOOLCHAIN=auto`,
delegating control to the `go.mod` file.
This is the behavior essentially all Go users would see as the default.
Development toolchains—what you get by checking out the Go repository
and running `make.bash`—will default to `GOTOOLCHAIN=local`.
This is necessary for developers of Go itself, so that when working on Go
you actually use the copy you're working on and not a different copy of Go.
Once the toolchain is selected, it would still look at the `go` version:
if the `go` version is newer than the toolchain being run,
the toolchain will refuse to build the program:
Go 1.29 would refuse to attempt to build code that declares `go 1.30`.
### The `go` and `toolchain` lines in `go.mod` in the work module
The `go` line in the `go.mod` in the work module selects the Go semantics.
When the locally installed Go toolchain is newer than the `go` line,
it provides the requested older semantics directly, instead of invoking a stale toolchain.
([Proposal #56986](https://go.dev/issue/56986) addresses making the older semantics more accurate.)
But if the `go` line names a newer Go toolchain, then the locally installed
Go toolchain downloads and runs the newer toolchain.
For example, if we are running Go 1.30 and have a `go.mod` that says
    go 1.30.1
then Go 1.30 would download and invoke Go 1.30.1 to complete the command.
On the other hand, if the `go.mod` says
    go 1.20rc1
then Go 1.30 will provide the Go 1.20rc1 semantics itself instead of running the
Go 1.20 rc1 toolchain.
Developers may want to run a newer toolchain but with older language semantics.
To enable this, the `go.mod` file would also support a new `toolchain` line.
If present, the `toolchain` line would specify the toolchain to use,
and the `go` line would only specify the Go version for language semantics.
For example:
    go 1.18
    toolchain go1.20rc1
would select the Go 1.18 semantics for this module but use Go 1.20 rc1 to build
(all still assuming `GOTOOLCHAIN=auto`; the environment variable
overrides the `go.mod` file).
In contrast to the older/newer distinction with the `go` line,
the `toolchain` line always applies: if Go 1.30 sees a `go.mod`
that says `toolchain go1.20rc1`, then it downloads Go 1.20 rc1.
The syntax `toolchain local` would be like setting `GOTOOLCHAIN=local`,
indicating to always use the locally installed toolchain.
### Updating the Go toolchain with `go get`
As part of this proposal, the `go get` command would change
to maintain the `go` and `toolchain` lines.
When updating module requirements during `go get`,
the `go` command would determine the minimum toolchain
required by taking the minimum of all the `go` lines in the
modules in the build graph; call that Go 1.M.
Then the `go` command would make sure the work module's `go.mod`
specifies a toolchain of Go 1.M beta 1 or later.
If so, no change is needed and the `go` and `toolchain` lines
are left as they are.
On the other hand, if a change is needed, the `go` command would edit the `toolchain` line
or add a new one, set to the latest Go 1.M patch release Go 1.M.P.
If Go 1.M is no longer supported, the `go` command
would use the minimum supported major version instead.
The command `go get go@1.20.1` would modify the `go` line to say `go 1.20.1`.
If the `toolchain` line is too old, then the update process just described would apply,
except that since the result would be matching `go` and `toolchain` lines,
the `toolchain` line would just be removed instead.
For direct control of the toolchain, `go get toolchain@go1.20.1` would
update the `toolchain` line. If too old a toolchain is specified, the command fails.
(It does not downgrade module dependencies to find a way to use an older toolchain.)
Updates like `go get go@latest` (or just `go get go`), `go get -p go`, and `go get toolchain@latest`
would work too.
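As a worked example (all version numbers here are invented for illustration): if the work module declares `go 1.27` and a `go get` pulls in a dependency whose `go.mod` says `go 1.28`, the update would leave the work module's `go.mod` looking something like:

```
go 1.27

toolchain go1.28.2
```

The `go` line is untouched; only the `toolchain` line records the newer minimum toolchain required by the dependency graph.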
### Downloading distributions
We have a mechanism for downloading verified software archives today:
the Go module system, including the checksum database.
This design would reuse that mechanism for Go distributions.
Each Go release would be treated as a set of module versions,
downloaded like any module, and checked against the checksum database
before being used.
In addition to this mapping, the `go` command would need to
set the execute bit on downloaded binaries.
This would be the first time we set the execute bit in the module cache,
at least on file systems with execute bits.
(On Windows, whether a file is executable depends only on its extension.)
The execute bit would only be set for the specific case of downloading
Go release modules, and only for the tool binaries.
A version like `go 1.18beta2` would map into the module download
machinery as `golang.org/release` version `v1.18.0-beta2.windows.amd64`
on a Windows/AMD64 system.
The version list (the `/@v/list` file) for the module would only list supported releases,
for use by the `go` command in toolchain updates.
Older releases would still be available when fetched directly,
just not listed in the default version list.
### Go command startup
At startup, before doing anything else, the `go` command would
find the `GOTOOLCHAIN` environment or configuration variable
and the `go` and `toolchain` lines from the work module's `go.mod` file
(or the workspace's `go.work` file)
and check whether it needs to use a different toolchain.
If not (for example, if `GOTOOLCHAIN=local` or if `GOTOOLCHAIN=auto`
and `go.mod` says `go 1.28` and the `go` command knows it is
already the Go 1.28 distribution), then the `go` command continues executing.
Otherwise, it looks for the requested Go release in the module cache,
downloading and unpacking it if needed,
and then re-execs the `go` command from that release.
### Effect in Dependencies
In a dependency module, the `go` line will continue to have
its “language semantics selection” effect,
as described earlier.
The Go toolchain will refuse to build a dependency that needs
newer Go semantics than the current toolchain.
For example if the work module says `go 1.27`
but a dependency says `go 1.28` and the toolchain
selection ends up using Go 1.27, Go 1.27 will see the
`go 1.28` line and refuse to build.
This should normally not happen:
the `go get` command that added the dependency
would have noticed the `go 1.28` line and
updated the work module's `toolchain` line to at least go1.28.
## Rationale
The rationale for the overall change was discussed in the background section.
People initially believe that every version listed in a module's
`go.mod` is the minimum version used in any build of that module.
This is true except for the `go` line.
Systems such as Cloud Native Buildpacks have made the
`go` line select the Go toolchain, confusing matters further.
Making the `go` line specify a minimum toolchain version
better aligns with user expectations.
It would also align better with systems like Cloud Native Buildpacks,
although they should be updated to match the new semantics exactly.
The easiest way to do that would be for them to run a Go toolchain
that implements the new rules and let it do its default toolchain selection.
There is a potential downside for CI systems without local download caches:
they might download the Go release modules over and over again.
Of course, such systems already download ordinary modules over and over again,
but ordinary modules tend to be smaller.
Go 1.20 removes all `pkg/**.a` files from the Go distribution,
which cuts the distribution size by about a factor of three.
We may be able to cut the size further in Go 1.21.
The best solution is for CI systems to run local caching proxies,
which would speed up their ordinary module downloads too.
Of course, given the choice between
(1) having to wait for a CI system (or a Linux distribution, or a cloud provider)
to update the available version of Go and
(2) being able to use any Go version at the cost of slightly slower builds, I'd definitely choose (2).
And CI systems that insist on never downloading could force GOTOOLCHAIN=local in the environment,
and then the build will break if a newer `go` line slips into `go.mod`.
Some people have raised a concern about pressure on the build cache
because builds using different toolchains cannot share object files.
If this turns out to be a problem in practice, we can definitely adjust
the build cache maintenance algorithms. [Issue #29561](https://go.dev/issue/29561) tracks that.
## Compatibility
This proposal does not violate any existing compatibility requirements.
It can improve compatibility, for example by making sure that code written for Go 1.30
is never built with Go 1.29, even if the build appears to succeed.
## Implementation
Overall the implementation is fairly short and straightforward.
Documentation probably outweighs new code.
Russ Cox, Michael Matloob, and Bryan Mills will do the work.
There is no working sketch of the current design at the moment.
Moved to [golang.org/design/25530-sumdb](https://golang.org/design/25530-sumdb).

# Proposal: Binary-Only Packages
Author: Russ Cox
Last updated: April 24, 2016
Discussion at [golang.org/issue/2775](https://golang.org/issue/2775).
## Abstract
We propose a way to incorporate binary-only packages (without complete source code) into a cmd/go workspace.
## Background
It is common in C for a code author to provide a C header file and the compiled form of a library
but not the complete source code.
The go command has never supported this officially.
In very early versions of Go, it was possible to arrange for a binary-only package simply by
removing the source code after compiling.
But that state looks the same as when the package source code has been deleted
because the package itself is no longer available, in which case the compiled form
should not continue to be used.
For many years now, the go command has assumed the latter.
Then it was possible to arrange for a binary-only package by replacing the source code
after compiling, while keeping the modification time of the source code older than
the modification time of the compiled form.
But in normal usage, removing an individual source file is cause for recompilation
even though that cannot be seen in the modification times.
To detect that situation,
Go 1.5 started using the full set of source file names that went into
a package as one input to a hash that produced the package's “build ID”.
If the go command's expected build ID does not match the compiled package's
build ID, the compiled package is out of date, even if the modification times suggest
otherwise (see [golang.org/cl/9154](https://golang.org/cl/9154)).
From Go 1.5 then, to arrange for a binary-only package,
it has been necessary to replace the source code after compiling
but keep the same set of file names and also keep the source
modification times older than the compiled package's.
In the future we may experiment with including the source code itself
in the hash that produces the build ID, which would completely
defeat any attempt at binary-only packages.
Fundamentally, as time goes on the go command gets better and better at detecting
mismatches between the source code and the compiled form,
yet in some cases it is explicitly desired that the source code not match
the compiled form (specifically, that the source code not be included at all).
If this usage is to keep working, it must be explicitly supported.
## Proposal
We propose to add official support for binary-only packages to the cmd/go toolchain,
by introduction of a new `//go:binary-only-package` comment.
The go/build package's type Package will contain a new field `IncompleteSources bool` indicating
whether the `//go:binary-only-package` comment is present.
The go command will refuse to recompile a package containing the comment.
If a suitable binary form of the package is already installed, the go command will use it.
Otherwise the go command will report that the binary form is missing and cannot be built.
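A simplified sketch of how a build tool might detect the comment when scanning a source file. The real go/build logic reads comments more carefully (honoring block comments, for example); this is an approximation with assumed behavior.

```go
package main

import (
	"fmt"
	"strings"
)

// hasBinaryOnlyComment reports whether Go source text contains the
// //go:binary-only-package comment before the package clause. This is
// a rough sketch of what would set go/build's IncompleteSources field.
func hasBinaryOnlyComment(src string) bool {
	for _, line := range strings.Split(src, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "package ") {
			return false // only comments before the package clause count
		}
		if line == "//go:binary-only-package" {
			return true
		}
	}
	return false
}

func main() {
	src := "//go:binary-only-package\n\npackage secret\n"
	fmt.Println(hasBinaryOnlyComment(src)) // true
}
```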
Users must install the package binary into the correct location in the $GOPATH/pkg tree
themselves. Distributors of binary-only packages might distribute
them as .zip files to be unpacked in the root of a $GOPATH, including files in both the src/ and pkg/
trees.
The “go get” command will still require complete source code and will not
recognize or otherwise enable the distribution of binary-only packages.
## Rationale
Various users have reported working with companies that want to provide them with
binary but not source forms of purchased packages.
We want to define an explicit way to do that instead of fielding bug reports
each time the go command gets smarter about detecting source-vs-binary mismatches.
The package source code itself must be present in some form,
or else we can't tell if the package was deleted entirely (see background above).
The implication is that it will simply not be the actual source code for the package.
A special comment is a natural way to signal this situation,
especially since the go command is already reading the source code
for package name, import information, and build tag comments.
Having a “fake” version of the source code also provides a way to supply
documentation compatible with “go doc” and “godoc”
even though the complete source code is missing.
The compiled form of the package does contain information about the source code,
for example source file names, type definitions for data structures used in the
public API, and inlined function bodies. It is assumed that the distributors of
binary-only packages understand that they include this information.
## Compatibility
There are no problems raised by the
[compatibility guidelines](https://golang.org/doc/go1compat).
If anything, the explicit support will help keep such binary-only packages
working better than they have in the past.
To the extent that tools process source code and not compiled packages,
those tools will not work with binary-only packages.
The compiler and linker will continue to enforce that all packages be compiled
with the same version of the toolchain: a binary-only package built with Go 1.4
will not work with Go 1.5.
Authors and users of binary-only packages must live with these implications.
## Implementation
The implementation is essentially as described in the proposal section above.
One additional detail is that the go command must load the build ID for the package
in question from the compiled binary form directly, instead of deriving it from the
source files.
I will implement this change for Go 1.7.
# Proposal: Register-based Go calling convention
Author: Austin Clements, with input from Cherry Zhang, Michael
Knyszek, Martin Möhrmann, Michael Pratt, David Chase, Keith Randall,
Dan Scales, and Ian Lance Taylor.
Last updated: 2020-08-10
Discussion at https://golang.org/issue/40724.
## Abstract
We propose switching the Go ABI from its current stack-based calling
convention to a register-based calling convention.
[Preliminary experiments
indicate](https://github.com/golang/go/issues/18597#issue-199914923)
this will achieve at least a 5–10% throughput improvement across a
range of applications.
This will remain backwards compatible with existing assembly code that
assumes Go’s current stack-based calling convention through Go’s
[multiple ABI
mechanism](https://golang.org/design/27539-internal-abi).
## Background
Since its initial release, Go has used a *stack-based calling
convention* based on the Plan 9 ABI, in which arguments and result
values are passed via memory on the stack.
This has significant simplicity benefits: the rules of the calling
convention are simple and build on existing struct layout rules; all
platforms can use essentially the same conventions, leading to shared,
portable compiler and runtime code; and call frames have an obvious
first-class representation, which simplifies the implementation of the
`go` and `defer` statements and reflection calls.
Furthermore, the current Go ABI has no *callee-save registers*,
meaning that no register contents live across a function call (any
live state in a function must be flushed to the stack before a call).
This simplifies stack tracing for garbage collection and stack growth
and stack unwinding during panic recovery.
Unfortunately, Go’s stack-based calling convention leaves a lot of
performance on the table.
While modern high-performance CPUs heavily optimize stack access,
accessing arguments in registers is still roughly [40%
faster](https://gist.github.com/aclements/ded22bb8451eead8249d22d3cd873566)
than accessing arguments on the stack.
Furthermore, a stack-based calling convention, especially one with no
callee-save registers, induces additional memory traffic, which has
secondary effects on overall performance.
Most language implementations on most platforms use a register-based
calling convention that passes function arguments and results via
registers rather than memory and designates some registers as
callee-save, allowing functions to keep state in registers across
calls.
## Proposal
We propose switching the Go ABI to a register-based calling
convention, starting with a minimum viable product (MVP) on amd64, and
then expanding to other architectures and improving on the MVP.
We further propose that this calling convention should be designed
specifically for Go, rather than using platform ABIs.
There are several reasons for this.
It’s incredibly tempting to use the platform calling convention, as it
seems that would allow for more efficient language interoperability.
Unfortunately, there are two major reasons it would do little good,
both related to the scalability of goroutines, a central feature of
the Go language.
One reason goroutines scale so well is that the Go runtime dynamically
resizes their stacks, but this imposes requirements on the ABI that
aren’t satisfied by non-Go functions, thus requiring the runtime to
transition out of the dynamic stack regime on a foreign call.
Another reason is that goroutines are scheduled by the Go runtime
rather than the OS kernel, but this means that transitions to and from
non-Go code must be communicated to the Go scheduler.
These two things mean that sharing a calling convention wouldn’t
significantly lower the cost of calling non-Go code.
The other tempting reason to use the platform calling convention would
be tooling interoperability, particularly with debuggers and profiling
tools.
However, these almost universally support DWARF or, for profilers,
frame pointer unwinding.
Go will continue to work with DWARF-based tools and we can make the Go
ABI compatible with platform frame pointer unwinding without otherwise
taking on the platform ABI.
Hence, there’s little upside to using the platform ABI.
And there are several reasons to favor using our own ABI:
- Most existing ABIs were based on the C language, which differs in
important ways from Go.
For example, most ELF ABIs (at least x86-64, ARM64, and RISC-V)
would force Go slices to be passed on the stack rather than in
registers because the slice header is three words.
Similarly, because C functions rarely return more than one word,
most platform ABIs reserve at most two registers for results.
Since Go functions commonly return at least three words (a result
and a two word error interface value), the platform ABI would force
such functions to return values on the stack.
Other things that influence the platform ABI include that array
arguments in C are passed by reference rather than by value and
small integer types in C are promoted to `int` rather than retaining
their type.
Hence, platform ABIs simply aren’t a good fit for the Go language.
- Platform ABIs typically define callee-save registers, which place
substantial additional requirements on a garbage collector.
There are alternatives to callee-save registers that share many of
their benefits, while being much better suited to Go.
- While platform ABIs are generally similar at a high level, their
details differ in myriad ways.
By defining our own ABI, we can follow a common structure across all
platforms and maintain much of the cross-platform simplicity and
reliability of Go’s stack-based calling convention.
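For instance, ordinary Go signatures routinely involve three-word values on both sides of a call. The example below is illustrative, not from the proposal:

```go
package main

import (
	"errors"
	"fmt"
)

// head takes a slice (a three-word header: pointer, length, capacity)
// and returns a value plus an error (itself two words), so both the
// argument and result lists exceed what typical C-oriented platform
// ABIs pass in registers.
func head(s []byte) (byte, error) {
	if len(s) == 0 {
		return 0, errors.New("empty slice")
	}
	return s[0], nil
}

func main() {
	b, err := head([]byte("go"))
	fmt.Println(b, err) // 103 <nil>
}
```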
The new calling convention will remain backwards-compatible with
existing assembly code that’s based on the stack-based calling
convention via Go’s [multiple ABI
mechanism](https://golang.org/design/27539-internal-abi).
This same multiple ABI mechanism allows us to continue to evolve the
Go calling convention in future versions.
This lets us start with a simple, minimal calling convention and
continue to optimize it in the future.
The rest of this proposal outlines the work necessary to switch Go to
a register-based calling convention.
While it lays out the requirements for the ABI, it does not describe a
specific ABI.
Defining a specific ABI will be one of the first implementation steps,
and its definition should reside in a living document rather than a
proposal.
## Go’s current stack-based ABI
We give an overview of Go’s current ABI to give a sense of the
requirements of any Go ABI and because the register-based calling
convention builds on the same concepts.
In the stack-based Go ABI, when a function F calls a function or
method G, F reserves space in its own stack frame for G’s receiver (if
it’s a method), arguments, and results.
These are laid out in memory as if G’s receiver, arguments, and
results were simply fields in a struct.
There is one exception to all call state being passed on the stack: if
G is a closure, F passes a pointer to its function object in a
*context register*, via which G can quickly access any closed-over
values.
Other than a few fixed-function registers, all registers are
caller-save, meaning F must spill any live state in registers to its
stack frame before calling G and reload the registers after the call.
The Go ABI also keeps a pointer to the runtime structure representing
the current goroutine (“G”) available for quick access.
On 386 and amd64, it is stored in thread-local storage; on all other
platforms, it is stored in a dedicated register.<sup>1</sup>
Every function must ensure sufficient stack space is available before
reserving its stack frame.
The current stack bound is stored in the runtime goroutine structure,
which is why the ABI keeps this readily accessible.
The standard prologue checks the stack pointer against this bound and
calls into the runtime to grow the stack if necessary.
In assembly code, this prologue is automatically generated by the
assembler itself.
Cooperative preemption is implemented by poisoning a goroutine’s stack
bound, and thus also makes use of this standard prologue.
Finally, both stack growth and the Go garbage collector must be able
to find all live pointers.
Logically, function entry and every call instruction has an associated
bitmap indicating which slots in the local frame and the function’s
argument frame contain live pointers.
Sometimes liveness information is path-sensitive, in which case a
function will have additional [*stack
object*](https://golang.org/cl/134155) metadata.
In all cases, all pointers are in known locations on the stack.
<sup>1</sup> This is largely a historical accident.
The G pointer was originally stored in a register on 386/amd64.
This is ideal, since it’s accessed in nearly every function prologue.
It was moved to TLS in order to support cgo, since transitions from C
back to Go (including the runtime signal handler) needed a way to
access the current G.
However, when we added ARM support, it turned out accessing TLS in
every function prologue was far too expensive on ARM, so all later
ports used a hybrid approach where the G is stored in both a register
and TLS and transitions from C restore it from TLS.
## ABI design recommendations
Here we lay out various recommendations for the design of a
register-based Go ABI.
The rest of this document assumes we’ll be following these
recommendations.
1. Common structure across platforms.
This dramatically simplifies porting work in the compiler and
runtime.
We propose that each architecture should define a sequence of
integer and floating point registers (and in the future perhaps
vector registers), plus size and alignment constraints, and that
beyond this, the calling convention should be derived using a
shared set of rules as much as possible.
1. Efficient access to the current goroutine pointer and the context
register for closure calls.
Ideally these will be in registers; however, we may use TLS on
architectures with extremely limited registers (namely, 386).
1. Support for many-word return values.
Go functions frequently return three or more words, so this must be
supported efficiently.
1. Support for scanning and adjusting pointers in register arguments
on stack growth.
Since the function prologue checks the stack bound before reserving
a stack frame, the runtime must be able to spill argument registers
and identify those containing pointers.
1. First-class generic call frame representation.
The `go` and `defer` statements as well as reflection calls need to
manipulate call frames as first-class, in-memory objects.
Reflect calls in particular are simplified by a common, generic
representation with fairly generic bridge code (the compiler could
generate bridge code for `go` and `defer`).
1. No callee-save registers.
Callee-save registers complicate stack unwinding (and garbage
collection if pointers are allowed in callee-save registers).
Inter-function clobber sets have many of the benefits of
callee-save registers, but are much simpler to implement in a
garbage collected language and are well-suited to Go’s compilation
model.
For an MVP, we’re unlikely to implement any form of live registers
across calls, but we’ll want to revisit this later.
1. Where possible, be compatible with platform frame-pointer unwinding
rules.
This helps Go interoperate with system-level profilers, and can
potentially be used to optimize stack unwinding in Go itself.
There are also some notable non-requirements:
1. No compatibility with the platform ABI (other than frame pointers).
This has more downsides and upsides, as described above.
1. No binary compatibility between Go versions.
This is important for shared libraries in C, but Go already
requires all shared libraries in a process to use the same Go
toolchain version.
This means we can continue to evolve and improve the ABI.
## Toolchain changes overview
This section outlines the changes that will be necessary to the Go
build toolchain and runtime.
The "Detailed design" section will go into greater depth on some of
these.
### Compiler
*Abstract argument registers*: The compiler’s register allocator will
need to allocate function arguments and results to the appropriate
registers.
However, it needs to represent argument and result registers in a
platform-independent way prior to architecture lowering and register
allocation.
We propose introducing generic SSA values to represent the argument
and result registers, as done in [David Chase’s
prototype](https://golang.org/cl/28832).
These would simply represent the *i*th argument/result register and
register allocation would assign them to the appropriate architecture
registers.
Having a common ABI structure across platforms means the
architecture-independent parts of the compiler would only need to know
how many argument/result registers the target architecture has.
*Late call lowering*: Call lowering and argument frame construction
currently happen during AST to SSA lowering, which happens well before
register allocation.
Hence, we propose moving call lowering much later in the compilation
process.
Late call lowering will have knock-on effects, as the current approach
hides a lot of the structure of calls from most optimization passes.
*ABI bridges*: For compatibility with existing assembly code, the
compiler must generate ABI bridges when calling between Go
(ABIInternal) and assembly (ABI0) code, as described in the [internal
ABI proposal](https://golang.org/design/27539-internal-abi).
These are small functions that translate between ABIs according to a
function’s type.
While the compiler currently differentiates between the two ABIs
internally, since they’re actually identical right now, it currently
only generates *ABI aliases* and has no mechanism for generating ABI
bridges.
As a post-MVP optimization, the compiler should inline these ABI
bridges where possible.
*Argument GC map*: The garbage collector needs to know which arguments
contain live pointers at function entry and at any calls (since these
are preemption points).
Currently this is represented as a bitmap over words in the function’s
argument frame.
With the register-based ABI, the compiler will need to emit a liveness
map for argument registers for the function entry point.
Since initially we won't have any live registers across calls, live
arguments will be spilled to the stack at a call, so the compiler does
*not* need to emit register maps at calls.
For functions that still require a stack argument frame (because their
arguments don’t all fit in registers), the compiler will also need to
emit argument frame liveness maps at the same points it does today.
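One plausible shape for the new entry-point register map, purely as an assumption; the proposal leaves the actual FUNCDATA encoding open:

```go
package main

import "fmt"

// regPtrMap is a bitmap over integer argument registers: bit i set
// means register i holds a live pointer at function entry. This is a
// sketch of one possible encoding, not the real format.
type regPtrMap uint16

// pointerRegs expands the bitmap into the list of register indices
// the garbage collector would need to scan.
func (m regPtrMap) pointerRegs() []int {
	var regs []int
	for i := 0; i < 16; i++ {
		if m&(1<<i) != 0 {
			regs = append(regs, i)
		}
	}
	return regs
}

func main() {
	fmt.Println(regPtrMap(0b0101).pointerRegs()) // [0 2]
}
```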
*Traceback argument maps*: Go tracebacks currently display a simple
word-based hex dump of a function’s argument frame.
This is not particularly user-friendly nor high-fidelity, but it can
be incredibly valuable for debugging.
With a register-based ABI, there’s a wide range of possible designs
for retaining this functionality.
For an MVP, we propose trying to maintain a similar level of fidelity.
In the future, we may want more detailed maps, or may want to simply
switch to using DWARF location descriptions.
To that end, we propose that the compiler should emit two logical
maps: a *location map* from (PC, argument word index) to
register/`stack`/`dead` and a *home map* from argument word index to
stack home (if any).
Since a named variable’s stack spill home is fixed if it ever spills,
the location map can use a single distinguished value for `stack` that
tells the runtime to refer to the home map.
This approach works well for an ABI that passes argument values in
separate registers without packing small values.
The `dead` value is not necessarily the same as the garbage
collector’s notion of a dead slot: for the garbage collector, you want
slots to become dead as soon as possible, while for debug printing,
you want them to stay live as long as possible (until clobbered by
something else).
The exact encoding of these tables is to be determined.
Most likely, we’ll want to introduce pseudo-ops for representing
changes in the location map that the `cmd/internal/obj` package can
then encode into `FUNCDATA`.
The home map could be produced directly by the compiler as `FUNCDATA`.
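In Go terms, the two maps might be modeled like this. The types are purely illustrative; as noted above, the encoding is deliberately left to be determined:

```go
package main

import "fmt"

// argLoc says where one argument word lives at a given PC.
type argLoc int8

const (
	locDead  argLoc = -2 // not recoverable for printing at this PC
	locStack argLoc = -1 // spilled: consult the home map
	// values >= 0 name the register currently holding the word
)

// locationMap: per PC (or PC range), the location of each argument word.
type locationMap map[uintptr][]argLoc

// homeMap: the fixed stack spill offset for each argument word,
// consulted when the location map says locStack.
type homeMap []int32

func main() {
	locs := locationMap{0x1000: {0, 1, locStack}}
	homes := homeMap{-1, -1, 16}
	fmt.Println(locs[0x1000][2] == locStack, homes[2]) // true 16
}
```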
*DWARF locations*: The compiler will need to generate DWARF location
lists for arguments and results.
It already has this ability for local variables, and we should reuse
that as much as possible.
We will need to ensure Delve and GDB are compatible with this.
Both already support location lists in general, so this is unlikely to
require much (if any) work in these debuggers.
Clobber sets will require further changes, which we discuss later.
We propose not implementing clobber sets (or any form of callee-save)
for the MVP.
### Linker
The linker requires relatively minor changes, all related to ABI
bridges.
*Eliminate ABI aliases*: Currently, the linker resolves ABI aliases
generated by the compiler by treating all references to a symbol
aliased under one ABI as references to the symbol under the other ABI.
Once the compiler generates ABI bridges rather than aliases, we can
remove this mechanism, which is likely to simplify and speed up the
linker somewhat.
*ABI name mangling*: Since Go ABIs work by having multiple symbol
definitions under the same name, the linker will also need to
implement a name mangling scheme for non-Go symbol tables.
### Runtime
*First-class call frame representation*: The `go` and `defer`
statements and reflection calls must manipulate call frames as
first-class objects.
While the requirements of these three cases differ, we propose having
a common first-class call frame representation that can capture a
function’s register and stack arguments and record its register and
stack results, along with a small set of generic call bridges that
invoke a call using the generic call frame.
*Stack growth*: Almost every Go function checks for sufficient stack
space before opening its local stack frame.
If there is insufficient space, it calls into the `runtime.morestack`
function to grow the stack.
Currently, `morestack` saves only the calling PC, the stack pointer,
and the context register (if any) because these are the only registers
that can be live at function entry.
With register-based arguments, `morestack` will also have to save all
argument registers.
We propose that it simply spill all *possible* argument registers
rather than trying to be specific to the function; `morestack` is
relatively rare, so the cost of this is unlikely to be noticeable.
It’s likely possible to spill all argument registers to the stack
itself: every function that can grow the stack ensures that there’s
room not only for its local frame, but also for a reasonably large
“guard” space.
`morestack` can spill into this guard space.
The garbage collector can recognize `morestack`’s spill space and use
the argument map of its caller as the stack map of `morestack`.
*Runtime assembly*: While Go’s multiple ABI mechanism makes it
generally possible to transparently call between Go and assembly code
even if they’re using different ABIs, there are runtime assembly
functions that have deep knowledge of the Go ABI and will have to be
modified.
This includes any function that takes a closure (`mcall`,
`systemstack`), is called in a special context (`morestack`), or is
involved in reflection-like calls (`reflectcall`, `debugCallV1`).
*Cgo wrappers*: Generated cgo wrappers marked with
`//go:cgo_unsafe_args` currently access their argument structure by
casting a pointer to their first argument.
This violates the `unsafe.Pointer` rules and will no longer work with
this change.
We can either special case `//go:cgo_unsafe_args` functions to use
ABI0 or change the way these wrappers are generated.
*Stack unwinding for panic recovery*: When a panic is recovered, the
Go runtime must unwind the panicking stack and resume execution after
the deferred call of the recovering function.
For the MVP, we propose not retaining any live registers across calls,
in which case stack unwinding will not have to change.
This is not the case with callee-save registers or clobber sets.
*Traceback argument printing*: As mentioned in the compiler section,
the runtime currently prints a hex dump of function arguments in panic
tracebacks.
This will have to consume the new traceback argument metadata produced
by the compiler.
## Detailed design
This section dives deeper into some of the toolchain changes described
above.
We’ll expand this section over time.
### `go`, `defer` and reflection calls
Above we proposed using a first-class call frame representation for
`go` and `defer` statements and reflection calls with a small set of
call bridges.
These three cases have somewhat different requirements:
- The types of `go` and `defer` calls are known statically, while
reflect calls are not.
This means the compiler could statically generate bridges to
unmarshall arguments for `go` and `defer` calls, but this isn’t an
option for reflection calls.
- The return values of `go` and `defer` calls are always ignored,
while reflection calls must capture results.
This means a call bridge for a `go` or `defer` call can be a tail
call, while reflection calls can require marshalling return values.
- Call frames for `go` and `defer` calls are long-lived, while
reflection call frames are transient.
This means the garbage collector must be able to scan `go` and
`defer` call frames, while we could use non-preemptible regions for
reflection calls.
- Finally, `go` call frames are stored directly on the stack, while
`defer` and reflection call frames may be constructed in the heap.
This means the garbage collector must be able to construct the
appropriate stack map for `go` call frames, but `defer` and
reflection call frames can use the heap bitmap.
It also means `defer` and reflection calls that require stack
arguments must copy that part of the call frame from the heap to the
stack, though we don’t expect this to be the common case.
To satisfy these requirements, we propose the following generic
call-frame representation:
```
struct {
pc uintptr // PC of target function
nInt, nFloat uintptr // # of int and float registers
ints [nInt]uintptr // Int registers
floats [nFloat]uint64 // Float registers
ctxt uintptr // Context register
stack [...]uintptr // Stack arguments/result space
}
```
`go` calls can build this structure on the new goroutine stack and the
call bridge can pop the register part of this structure from the
stack, leaving just the `stack` part on the stack, and tail-call `pc`.
The garbage collector can recognize this call bridge and construct the
stack map by inspecting the `pc` in the call frame.
`defer` and reflection calls can build frames in the heap with the
appropriate heap bitmap.
The call bridge in these cases must open a new stack frame, copy
`stack` to the stack, load the register arguments, call `pc`, and then
copy the register results and the stack results back to the in-heap
frame (using write barriers where necessary).
It may be valuable to have optimized versions of this bridge for
tail-calls (always the case for `defer`) and register-only calls
(likely a common case).
In the register-only reflection call case, the bridge could take the
register arguments as arguments itself and return register results as
results; this would avoid any copying or write barriers.
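A Go-level sketch of building such a frame for a call whose integer arguments overflow the available registers. Field names follow the pseudocode above; the register count and helper name are assumptions:

```go
package main

import "fmt"

// callFrame is a heap-friendly analogue of the generic call frame.
type callFrame struct {
	pc     uintptr
	ints   []uintptr // integer register arguments
	floats []uint64  // floating-point register arguments
	ctxt   uintptr   // closure context, if any
	stack  []uintptr // overflow arguments / result space
}

// newCallFrame splits integer argument words between registers and
// the stack area, as a defer/reflect bridge would need to do.
func newCallFrame(pc uintptr, args []uintptr, nIntRegs int) *callFrame {
	f := &callFrame{pc: pc}
	if len(args) > nIntRegs {
		f.ints, f.stack = args[:nIntRegs], args[nIntRegs:]
	} else {
		f.ints = args
	}
	return f
}

func main() {
	f := newCallFrame(0x4000, []uintptr{1, 2, 3}, 2)
	fmt.Println(len(f.ints), len(f.stack)) // 2 1
}
```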
## Compatibility
This proposal is Go 1-compatible.
While Go assembly is not technically covered by Go 1 compatibility,
this proposal will maintain compatibility with the vast majority of
assembly code using Go’s [multiple ABI
mechanism](https://golang.org/design/27539-internal-abi).
This mechanism translates between Go’s existing stack-based calling
convention, used by all existing assembly code, and Go’s internal
calling convention.
There are a few known forms of unsafe code that this change will
break:
- Assembly code that invokes Go closures.
The closure calling convention was never publicly documented, but
there may be code that does this anyway.
- Code that performs `unsafe.Pointer` arithmetic on pointers to
arguments in order to observe the contents of the stack.
This is a violation of the [`unsafe.Pointer`
rules](https://pkg.go.dev/unsafe#Pointer) today.
## Implementation
We aim to implement a minimum viable register-based Go ABI for amd64
in the 1.16 time frame.
As of this writing (nearing the opening of the 1.16 tree), Dan Scales
has made substantial progress on ABI bridges for a simple ABI change
and David Chase has made substantial progress on late call lowering.
Austin Clements will lead the work with David Chase and Than McIntosh
focusing on the compiler side, Cherry Zhang focusing on aspects that
bridge the compiler and runtime, and Michael Knyszek focusing on the
runtime.
# Proposal: Decentralized GC coordination
Author(s): Austin Clements
Last updated: 2015-10-25
Discussion at https://golang.org/issue/11970.
## Abstract
The Go 1.5 GC is structured as a straight-line coordinator goroutine
plus several helper goroutines. All state transitions go through the
coordinator. This makes state transitions dependent on the scheduler,
which can delay transitions, in turn extending the length of the GC
cycle, blocking allocation, and occasionally leading to long delays.
We propose to replace this straight-line coordinator with an explicit
state machine where state transitions can be performed by any
goroutine.
## Background
As of Go 1.5, all GC phase changes are managed through straight-line
code in the `runtime.gc` function, which runs on a dedicated GC
goroutine. However, much of the real work is done in other goroutines.
These other goroutines generally detect when it is time for a phase
change and must coordinate with the main GC goroutine to effect this
phase change. This coordination delays phase changes and opens windows
where, for example, the mutator can allocate uncontrolled, or nothing
can be accomplished because everything is waiting on the coordinator
to wake up. This has led to bugs like
[#11677](https://golang.org/issue/11677) and
[#11911](https://golang.org/issue/11911). We've tried to mitigate this
by handing control directly to the coordinator goroutine when we wake
it up, but the scheduler isn't designed for this sort of explicit
co-routine scheduling, so this doesn't always work and it's more
likely to fall apart under stress than an explicit design.
## Proposal
We will restructure the garbage collector as an explicit state machine
where any goroutine can effect a state transition. This is primarily
an implementation change, not an algorithm change: for the most part,
these states and the transitions between them closely follow the
current GC algorithm.
Each state is global and determines the GC-related behavior of all
goroutines. Each state also has an exit condition. State transitions
are performed immediately by whatever goroutine detects that the
current state's exit condition is satisfied. Multiple goroutines may
detect an exit condition simultaneously, in which case none of these
goroutines may progress until the transition has been performed. For
many transitions, this is necessary to prevent runaway heap growth.
Each transition has a specific set of steps to prepare for the next
state and the system enters the next state as soon as those steps are
completed. Furthermore, each transition is designed to make the exit
condition that triggers that transition false so that the transition
happens once and only once per cycle.
In principle, all of the goroutines that detect an exit condition
could assist in performing the transition. However, we take a simpler
approach where all transitions are protected by a global *transition
lock* and transitions are designed to perform very little non-STW
work. When a goroutine detects the exit condition, it acquires the
transition lock, re-checks if the exit condition is still true and, if
not, simply releases the lock and continues executing in whatever the
new state is. It is necessary to re-check the condition, rather than
simply check the current state, in case the goroutine was blocked
through an entire GC cycle.
The sequence of states and transitions is as follows:
* **State: Sweep/Off** This is the initial state of the system. No
scanning, marking, or assisting is performed. Mutators perform
proportional sweeping on allocation and background sweeping performs
additional sweeping on idle Ps.
In this state, after allocating, a mutator checks if the heap size
has exceeded the GC trigger size and, if so, it performs concurrent
sweep termination by sweeping any remaining unswept spans (there
shouldn't be any for a heap-triggered transition). Once there are no
unswept spans, it performs the *sweep termination* transition.
Periodic (sysmon-triggered) GC and `runtime.GC` perform these same
steps regardless of the heap size.
* **Transition: Sweep termination and initialization** Acquire
`worldsema`. Start background workers. Stop the world. Perform sweep
termination. Clear sync pools. Initialize GC state and statistics.
Enable write barriers, assists, and background workers. If this is a
concurrent GC, configure root marking, start the world, and enter
*concurrent mark*. If this is a STW GC (`runtime.GC`), continue with
the *mark termination* transition.
* **State: Concurrent mark 1** In this state, background workers
perform concurrent scanning and marking and mutators perform
assists.
Background workers initially participate in root marking and then
switch to draining heap mark work.
Mutators assist with heap marking work in response to allocation
according to the assist ratio established by the GC controller.
In this state, the system keeps an atomic counter of the number of
active jobs, which includes the number of background workers and
assists with checked out work buffers, plus the number of workers in
root marking jobs. If this number drops to zero and
`gcBlackenPromptly` is unset, the worker or assist that dropped it
to zero transitions to *concurrent mark 2*. Note that it's important
that this transition not happen until all root mark jobs are done,
which is why the counter includes this.
Note: Assists could participate in root marking jobs just like
background workers do and accumulate assist credit for this scanning
work. This would particularly help at the beginning of the cycle
when there may be little background credit or queued heap scan work.
This would also help with load balancing. In this case, we would
want to update `scanblock` to track scan credit and modify the scan
work estimate to include roots.
* **Transition: Disable workbuf caching** Disable caching of workbufs
by setting `gcBlackenPromptly`. Queue root mark jobs for globals.
Note: It may also make sense to queue root mark jobs for stacks.
This would require making it possible to re-scan a stack (and extend
existing stack barriers).
* **State: Concurrent mark 2** The goroutine that performed the flush
transition flushes all workbuf caches using `forEachP`. This counts
as an active job to prevent the next transition from happening
before this is done.
Otherwise, this state is identical to *concurrent mark 1*, except
that workbuf caches are disabled.
Because workbuf caches are disabled, if the active workbuf count
drops to zero, there is no more work. When this happens and
`gcBlackenPromptly` is set, the worker or assist that dropped the
count to zero performs the *mark termination* transition.
* **Transition: Mark termination** Stop the world. Unblock all parked
assists. Perform `gcMark`, checkmark (optionally), `gcSweep`, and
re-mark (optionally). Start the world. Release `worldsema`. Print GC
stats. Free stacks.
Note that `gcMark` itself runs on all Ps, so this process is
parallel even though it happens during a transition.
## Rationale
There are various alternatives to this approach. The most obvious is
to simply continue with what we do now: a central GC coordinator with
hacks to deal with delays in various transitions. This is working
surprisingly well right now, but only as a result of a good deal of
engineering effort (primarily the cascade of fixes on
[#11677](https://github.com/golang/go/issues/11677)) and its fragility
makes it difficult to make further changes to the garbage collector.
Another approach would be to make the scheduler treat the GC coordinator
as a high-priority goroutine and always schedule it immediately when
it becomes runnable. This would consolidate several of our current
state transition "hacks", which attempt to help out the scheduler.
However, in a concurrent setting it's important to not only run the
coordinator as soon as possible to perform a state transition, but
also to disallow uncontrolled allocation on other threads while this
transition is being performed. Scheduler hacks don't address the
latter problem.
## Compatibility
This change is internal to the Go runtime. It does not change any
user-facing Go APIs, and hence it satisfies Go 1 compatibility.
## Implementation
This change will be implemented by Austin Clements, hopefully in the
Go 1.6 development cycle. Much of the design has already been
prototyped.
Many of the prerequisite changes have already been completed. In
particular, we've already moved most of the non-STW work out of the GC
coordinator ([CL 16059](https://go-review.googlesource.com/#/c/16059/)
and [CL 16070](https://go-review.googlesource.com/#/c/16070/)), made
root marking jobs smaller
([CL 16043](https://go-review.googlesource.com/#/c/16043)), and
improved the synchronization of blocked assists
([CL 15890](https://go-review.googlesource.com/#/c/15890)).
The GC coordinator will be converted to a decentralized state machine
incrementally, one state/transition at a time where possible. At the
end of this, there will be no work left in the GC coordinator and it
will be deleted.
## Open issues
There are devils in the details. One known devil in the current
garbage collector that will affect this design in different ways is
the complex constraints on scheduling within the garbage collector
(and the runtime in general). For example, background workers are
currently not allowed to block, which means they can't stop the world
to perform mark termination. These constraints were designed for the
current coordinator-based system and we will need to find ways of
resolving them in the decentralized design.
# Proposal: Multi-dimensional slices
Author(s): Brendan Tracey, with input from the gonum team
Last updated: November 17th, 2016
## Abstract
This document proposes a generalization of Go slices from one to multiple
dimensions.
This language change makes slices more naturally suitable for applications such
as image processing, matrix computations, gaming, etc.
Arrays-of-arrays(-of-arrays-of...) are contiguous in memory and rectangular but
are not dynamically sized.
Slices-of-slices(-of-slices-of...) are dynamically sized but are neither
contiguous in memory nor uniform in length in each dimension.
The generalized slice described here is an N-dimensional rectangular data
structure with contiguous storage and a dynamically sized length and capacity
in each dimension.
This proposal defines slicing, indexing, and assignment, and provides extended
definitions for `make`, `len`, `cap`, `copy` and `range`.
## Nomenclature
This document extends the notion of a slice to include rectangular data.
As such, a multi-dimensional slice is properly referred to as simply a "slice".
When necessary, this document uses 1d-slice to refer to Go slices as they are
today, and nd-slice to refer to a slice in more than one dimension.
## Previous discussions
This document is self-contained, and prior discussions are not necessary for
understanding the proposal.
They are referenced here solely to provide a history of discussion on the subject.
Note that in a previous iteration of this document, an nd-slice was referred to as
a "table", and that many changes have been made since these earlier discussions.
### About this proposal
1. [Issue 6282 -- proposal: spec: multidimensional slices](https://golang.org/issue/6282)
2. [gonum-dev thread:](https://groups.google.com/forum/#!topic/gonum-dev/NW92HV_W_lY%5B1-25%5D)
3. [golang-nuts thread: Proposal to add tables (two-dimensional slices) to go](https://groups.google.com/forum/#!topic/golang-nuts/osTLUEmB5Gk%5B1-25%5D)
4. [golang-dev thread: Table proposal (2-D slices) on go-nuts](https://groups.google.com/forum/#!topic/golang-dev/ec0gPTfz7Ek)
5. [golang-dev thread: Table proposal next steps](https://groups.google.com/forum/#!searchin/golang-dev/proposal$20to$20add$20tables/golang-dev/T2oH4MK5kj8/kOMHPR5YpFEJ)
6. [Robert Griesemer proposal review:](https://go-review.googlesource.com/#/c/24271/) which suggested name change from "tables" to just "slices", and suggested referring to down-slicing as simply indexing.
### Other related threads
- [golang-nuts thread -- Multi-dimensional arrays for Go. It's time](https://groups.google.com/forum/#!topic/golang-nuts/Q7lwBDPmQh4%5B1-25%5D)
- [golang-nuts thread -- Multidimensional slices for Go: a proposal](https://groups.google.com/forum/#!topic/golang-nuts/WwQOuYJm_-s)
- [golang-nuts thread -- Optimizing a classical computation in Go](https://groups.google.com/forum/#!topic/golang-nuts/ScFRRxqHTkY)
- [Issue 13253 -- proposal: spec: strided slices](https://github.com/golang/go/issues/13253) Alternate proposal relating to multi-dimensional slices (closed)
## Background
Go presently lacks multi-dimensional slices.
Multi-dimensional arrays can be constructed, but they have fixed dimensions: a
function that takes a multi-dimensional array of size 3x3 is unable to handle an
array of size 4x4.
Go currently provides slices to allow code to be written for lists of unknown
length, but similar functionality does not exist for multiple dimensions;
slices only work in a single dimension.
One especially important rectangular data structure is the matrix.
Matrices are central to many areas of computing.
Several popular languages have been designed with the goal of making matrices
easy (MATLAB, Julia, and to some extent Fortran) and significant effort has been
spent in other languages to make matrix operations fast (Lapack, Intel
MKL, ATLAS, Eigpack, numpy).
Go was designed with speed and concurrency in mind, and so Go should be a great
language for numeric applications, and indeed, scientific programmers are using Go
despite the lack of support from the standard library for scientific computing.
While the gonum project has a [matrix library](https://github.com/gonum/matrix)
that provides a significant amount of functionality, the results are problematic
for reasons discussed below.
As both a developer and a user of the gonum matrix library, I can confidently
say that not only would implementation and maintenance be much easier with this
extension to slices, but also that using matrices would change from being
somewhat of a pain to being enjoyable to use.
The desire for good matrix support is a motivation for this proposal, but
matrices are not synonymous with 2d-slices.
A matrix is composed of real or complex numbers and has well-defined operations
(multiplication, determinant, Cholesky decomposition).
2d-slices, on the other hand, are merely rectangular data containers.
Slices can be of any dimension, hold any data type, and do not have any of the
additional semantics of a matrix.
A matrix can be constructed on top of a 2d-slice in an external package.
A rectangular data container can find use throughout the Go ecosystem.
A partial list is
1. Image processing: An image canvas can be represented as a rectangle of colors.
Here the ability to efficiently slice in multiple dimensions is important.
2. Machine learning: Typically feature vectors are represented as a row of a
matrix. Each feature vector has the same length, and so the additional safety of
a full rectangular data structure is useful.
Additionally, many fitting algorithms (such as linear regression) give this
rectangular data the additional semantics of a matrix, so easy interoperability
is very useful.
3. Game development: Go is becoming increasingly popular for the development
of games.
A player-specific section of a two or three dimensional space can be well
represented by an n-dimensional array or a slice of an nd-slice.
Two-dimensional slices are especially well suited for representing the game board
of tile-based games.
Go is a great general-purpose language, and allowing users to slice a
multi-dimensional array will increase the sphere of projects for which Go is ideal.
### Language Workarounds
There are several possible ways to emulate a rectangular data structure, each
with its own downsides.
This section discusses data in two dimensions, but similar problems exist for
higher dimensional data.
#### 1. Slice of slices
Perhaps the most natural way to express a two-dimensional slice in Go is to use
a slice of slices (for example `[][]float64`).
This construction allows convenient accessing and assignment using the
traditional slice access
    v := s[i][j]
    s[i][j] = v
This representation has two major problems.
First, a slice of slices, on its own, has no guarantees about the size of the
slices in the minor dimension.
Routines must either check that the lengths of the inner slices are all equal,
or assume that the dimensions are equal (and accept possible bounds errors).
This approach is error-prone for the user and unnecessarily burdensome for the
implementer.
In short, a slice of slices represents exactly that: a slice of
arbitrary-length slices.
It does not represent data where all of the minor dimension slices are of
equal length.
Secondly, a slice of slices has a significant amount of computational overhead
because accessing an element of a sub-slice means indirecting through a pointer
(the pointer to the slice's underlying array).
Many programs in numerical computing are dominated by the cost of matrix
operations (linear solve, singular value decomposition), and optimizing these
operations is the best way to improve performance.
Likewise, any unnecessary cost is a direct unnecessary slowdown.
On modern machines, pointer-chasing is one of the slowest operations.
At best, the pointer might be in the L1 cache.
Even so, keeping that pointer in the cache increases L1 cache pressure, slowing
down other code.
If the pointer is not in the L1 cache, its retrieval is considerably slower than
address arithmetic; at worst, it might be in main memory, which has a latency on
the order of a hundred times slower than address arithmetic.
Additionally, bounds checks that would be redundant in a true 2d-slice are
necessary in a slice of slices, since each inner slice could have a different
length, and some common operations, such as 2-d slicing, are expensive on a
slice of slices but cheap in other representations.
#### 2. Single slice
A second representation option is to contain the data in a single slice, and
maintain auxiliary variables for the size of the 2d-slice.
The main benefit of this approach is speed.
A single slice avoids some of the cache and index bounds concerns listed above.
However, this approach has several major downfalls.
The auxiliary size variables must be managed by hand and passed between
different routines.
Every access requires hand-writing the index arithmetic as well as
hand-written bounds checking (Go ensures that data is not accessed beyond the
slice, but not that the row and column bounds are respected).
Furthermore, it is not clear from the data representation whether the 2d-slice
is to be accessed in "row major" or "column major" format
    v := a[i*stride + j] // Row major a[i,j]
    v := a[i + j*stride] // Column major a[i,j]
In order to correctly and safely represent a slice-backed rectangular structure,
one needs four auxiliary variables: the number of rows, number of columns, the
stride, and also the ordering of the data since there is currently no "standard"
choice for data ordering.
A community accepted ordering for this data structure would significantly ease
package writing and improve package inter-operation, but relying on library
writers to follow unenforced convention is a recipe for confusion and incorrect
code.
#### 3. Struct type
A third approach is to create a struct data type containing a data slice and all
of the data access information.
The data is then accessed through method calls.
This is the approach used by [go.matrix](https://github.com/skelterjohn/go.matrix)
and gonum/matrix.
The struct representation contains the information required for single-slice
based access, but disallows direct access to the data slice.
Instead, method calls are used to access and assign values.
    type Dense struct {
        stride int
        rows   int
        cols   int
        data   []float64
    }

    func (d *Dense) At(i, j int) float64 {
        if uint(i) >= uint(d.rows) {
            panic("rows out of bounds")
        }
        if uint(j) >= uint(d.cols) {
            panic("cols out of bounds")
        }
        return d.data[i*d.stride+j]
    }

    func (d *Dense) Set(i, j int, v float64) {
        if uint(i) >= uint(d.rows) {
            panic("rows out of bounds")
        }
        if uint(j) >= uint(d.cols) {
            panic("cols out of bounds")
        }
        d.data[i*d.stride+j] = v
    }
From the user's perspective:
    v := m.At(i, j)
    m.Set(i, j, v)
The major benefits to this approach are that the data are encapsulated correctly
-- the data are presented as a rectangle, and panics occur when either dimension
is accessed out of bounds -- and that the defining package can efficiently implement
common operations (multiplication, linear solve, etc.) since it can access the
data directly.
This representation, however, suffers from legibility issues.
The At and Set methods when used in simple expressions are not too bad; they are
a couple of extra characters, but the behavior is still clear.
Legibility starts to erode, however, when used in more complicated expressions
    // Set the third column of a matrix to have a uniform random value
    for i := 0; i < nRows; i++ {
        m.Set(i, 2, (bounds[1]-bounds[0])*rand.Float64()+bounds[0])
    }

    // Perform a matrix add-multiply, c += a .* b (.* representing
    // element-wise multiplication)
    for i := 0; i < nRows; i++ {
        for j := 0; j < nCols; j++ {
            c.Set(i, j, c.At(i, j)+a.At(i, j)*b.At(i, j))
        }
    }
The above code segments are much clearer when written as an expression and
assignment
    // Set the third column of a matrix to have a uniform random value
    for i := 0; i < nRows; i++ {
        m[i,2] = (bounds[1]-bounds[0])*rand.Float64() + bounds[0]
    }

    // Perform a matrix add-multiply, c += a .* b
    for i := 0; i < nRows; i++ {
        for j := 0; j < nCols; j++ {
            c[i,j] += a[i,j] * b[i,j]
        }
    }
As will be discussed below, this representation also requires a significant API
surface to enable performance for code outside the defining package.
### Performance
This section discusses the relative performance of the approaches.
#### 1. Slice of slice
The slice of slices approach, as discussed above, has fundamental performance
limitations due to data non-locality.
It requires `n` pointer indirections to get to an element of an `n`-dimensional
slice, while in a multi-dimensional slice it only requires one.
#### 2. Single slice
The single-slice implementation, in theory, has performance identical to
generalized slices.
In practice, the details depend on the specifics of the implementation.
Bounds checking can be a significant portion of runtime for index-heavy code,
and a lot of effort has gone to removing redundant bounds checks in the SSA
compiler.
These checks can be proved redundant for both nd-slices and the single slice
representation, and there is no fundamental performance difference in theory.
In practice, for the single slice representation the compiler needs to prove that
the combination `i*stride + j` is in bounds, while for an nd-slice the compiler
just needs to prove that `i` and `j` are individually within bounds (since the
compiler knows it maintains the correct stride).
Both are feasible, but the latter is simpler, especially with the proposed
extensions to range.
#### 3. Struct type
The performance story for the struct type is more complicated.
Code within the implementing package can access the slice directly, and so the
discussion is identical to the above.
A user-implemented multi-dimensional slice based on a struct can be made as
efficient as the single slice representation, but it requires more than the
simple methods suggested above.
The code for the benchmarks can be found [here](https://play.golang.org/p/yx6ODaIqPl).
The "(BenchmarkXxx)" parenthetical below refer to these benchmarks.
All benchmarks were performed using Go 1.7.3.
A table at the end summarizes the results.
The example for performance comparison will be the function `C += A*B^T`.
This is a simpler version of the "General Matrix Multiply" at the core of many
numerical routines.
First consider a single-slice implementation (BenchmarkNaiveSlices), which
will be similar to the optimal performance.
    // Compute C += A*B^T, where C is an m×n matrix, A is an m×k matrix, and B
    // is an n×k matrix
    func MulTrans(m, n, k int, a, b, c []float64, lda, ldb, ldc int) {
        for i := 0; i < m; i++ {
            for j := 0; j < n; j++ {
                var t float64
                for l := 0; l < k; l++ {
                    t += a[i*lda+l] * b[j*ldb+l]
                }
                c[i*ldc+j] += t
            }
        }
    }
We can add an "AddSet" method (BenchmarkAddSet), and translate the above code
into the struct representation.
    // Compute C += A*B^T, where C is an m×n matrix, A is an m×k matrix, and B
    // is an n×k matrix
    func MulTrans(A, B, C Dense) {
        for i := 0; i < m; i++ {
            for j := 0; j < n; j++ {
                var t float64
                for l := 0; l < k; l++ {
                    t += A.At(i, l) * B.At(j, l)
                }
                C.AddSet(i, j, t)
            }
        }
    }
This translation is 500% slower, a very significant cost.
The reason for this significant penalty is that the Go compiler does not
currently inline methods that can panic, and the accessors contain panic calls
as part of the manual index bounds checks.
The next benchmark simulates a compiler with this restriction removed (BenchmarkAddSetNP)
by replacing the `panic` calls in the accessor methods with setting the first
data element to NaN (this is not good code, but it means the current Go compiler
can inline the method calls and the bounds checks still affect program execution
and so cannot be trivially removed).
This significantly decreases the running time, reducing the gap from 500% to only 35%.
The final cause of the performance gap is bounds checking.
The benchmark is modified so the bounds checks are removed, simulating a compiler
with better proving capability than the current compiler.
Further, the benchmark is run with `-gcflags=-B` (BenchmarkAddSetNB).
This closes the performance gap entirely (and also improves the single slice
implementation by 15%).
However, the initial single slice implementation can be significantly improved
as follows (BenchmarkSliceOpt).
    for i := 0; i < m; i++ {
        as := a[i*lda : i*lda+k]
        cs := c[i*ldc : i*ldc+n]
        for j := 0; j < n; j++ {
            bs := b[j*ldb : j*ldb+k]
            var t float64
            for l, v := range as {
                t += v * bs[l]
            }
            cs[j] += t
        }
    }
This reduces the cost by another 40% on top of the bounds check removal.
Similar performance using a struct representation can be achieved with a
"RowView" method (BenchmarkDenseOpt)
    func (d *Dense) RowView(i int) []float64 {
        if uint(i) >= uint(d.rows) {
            panic("rows out of bounds")
        }
        return d.data[i*d.stride : i*d.stride+d.cols]
    }
This again closes the gap with the single slice representation.
The conclusion is that the struct representation can eventually be as efficient
as the single slice representation.
Bridging the gap requires a compiler with better inlining ability and superior
bounds checking elimination.
On top of a better compiler, a suite of methods is needed on Dense to support
efficient operations.
The RowView method lets range be used, and the "operator methods" (AddSet,
AtSet, SubSet, MulSet, etc.) reduce the number of accesses.
Compare the final implementation using a struct
    for i := 0; i < m; i++ {
        as := A.RowView(i)
        cs := C.RowView(i)
        for j := 0; j < n; j++ {
            bs := B.RowView(j)
            var t float64
            for l, v := range as {
                t += v * bs[l]
            }
            cs[j] += t
        }
    }
with that of the nd-slice implementation using the syntax proposed here
    for i, as := range a {
        cs := c[i]
        for j, bs := range b {
            var t float64
            for l, v := range as {
                t += v * bs[l]
            }
            cs[j] += t
        }
    }
The indexing performed by RowView happens safely and automatically using range.
There is no need for the "OpSet" methods since they are automatic with slices.
Compiler optimizations are less necessary as the operations are already inlined,
and range eliminated most of the bounds checks.
Perhaps most importantly, the code snippet above is the most natural way to code
the function using nd-slices, and it is also the most efficient way to code it.
Efficient code is a consequence of good code when nd-slices are available.
| Benchmark | MulTrans (ms) |
| -------------------------- | :-----------: |
| Naive slice | 41.0 |
| Struct + AddSet | 207 |
| Struct + Inline | 56.0 |
| Slice + No Bounds (NB) | 34.9 |
| Struct + Inline + NB | 34.1 |
| Slice + NB + Subslice (SS) | 21.6 |
| Struct + Inline + NB + SS | 20.6 |
### Recap
The following table summarizes the current state of affairs with 2d data in Go.
| | Correct Representation | Access/Assignment Convenience | Speed |
| -------------: | :--------------------: | :---------------------------: | :---: |
| Slice of slice | X | ✓ | X |
| Single slice | X | X | ✓ |
| Struct type | ✓ | X | X |
In general, we would like our codes to be
1. Easy to use
2. Not error-prone
3. Performant
At present, an author of numerical code must choose *one*.
The relative importance of these priorities will be application-specific, which
will make it hard to establish one common representation.
This lack of consistency will make it hard for packages to inter-operate.
Improvements to the compiler will reduce the performance penalty for using the
correct representation, but even then many methods are required to achieve optimal
performance.
A language built-in meets all three goals, enabling code that is
simultaneously clearer and more efficient.
Generalized slices allow gophers to write simple, fast, and correct numerical
and graphics code.
## Proposal
The proposed changes are described first here in the Proposal section.
The rationale for the specific design choices is discussed afterward in the
Discussion section.
### Syntax
Just as `[]T` is shorthand for a slice, `[,]T` is shorthand for a two-dimensional
slice, `[,,]T` a three-dimensional slice, etc.
### Allocation
A slice may be constructed either using the make built-in or via a literal.
The elements are guaranteed to be stored in a single contiguous underlying
slice, in "row-major" order.
Specifically, for a 2d-slice, the underlying data slice first contains all
elements in the first row, followed by all elements in the second row, etc.
Thus, the 3x5 table
    00 01 02 03 04
    10 11 12 13 14
    20 21 22 23 24

is stored as

    [00, 01, 02, 03, 04, 10, 11, 12, 13, 14, 20, 21, 22, 23, 24]
Similarly, for a 3d-slice with lengths m, n, and p, the data is arranged as
    [t111, t112, ..., t11p, t121, ..., t12p, ..., t1np, t211, ..., tmnp]
#### Making a multi-dimensional slice
A new N-dimensional slice (of any element type) may be allocated by calling
the make built-in with a mandatory [N]int argument specifying the length in
each dimension, followed by an optional [N]int specifying the capacity in
each dimension.
If the capacity argument is not present, each capacity defaults to its
respective length.
These act like the length and capacity for slices, but on a per-dimension basis.
The slice will be filled with the zero value of the element type:
s := make([,]T, [2]int{m, n}, [2]int{maxm, maxn})
t := make([,]T, [...]int{m, n})
s2 := make([,,]T, [...]int{m, n, p}, [...]int{maxm, maxn, maxp})
t2 := make([,,]T, [3]int{m, n, p})
Calling make with a zero length or capacity is allowed, and is equivalent to
creating an equivalently sized multi-dimensional array and slicing it
(described fully below).
In the following code
u := make([,,,]float32, [4]int{0, 6, 4, 0})
v := [0][6][4][0]float32{}
w := v[0:0, 0:6, 0:4, 0:0]
u and w both have lengths and capacities of `[4]int{0, 6, 4, 0}`, and the
underlying data slice has 0 elements.
#### Slice literals
A slice literal can be constructed using nested braces
u := [,]T{{x, y, z}, {a, b, c}}
v := [,,]T{{{1, 2, 3, 4}, {5, 6, 7, 8}}, {{9, 10, 11, 12}, {13, 14, 15, 16}}}
The size of the slice will depend on the size of the brace sets, outside in.
For example, in a 2d-slice the number of rows is equal to the number of sets of
braces, and the number of columns is equal to the number of elements within
each set of braces.
In a 3d-slice, the length of the first dimension is the number of sets of brace
sets, etc.
Above, u has length [2, 3], and v has length [2, 2, 4].
It is a compile-time error if each element in a brace layer does not contain the
same number of elements.
Like normal slices and arrays, key-element literal construction is allowed.
For example, the two following constructions yield the same result
[,]int{{0:1, 2:0},{1:1, 2:0}, {2:1}}
[,]int{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
### Slicing
Slicing occurs by using the normal 2 or 3 index slicing rules in each dimension,
`i:j` or `i:j:k`.
The same panic rules as 1d-slices apply (`0 <= i <= j <= k <= capacity in that dim`).
Like slices, this updates the length and capacity in the respective dimensions
a := make([,]int, [2]int{10, 2}, [2]int{10, 15})
b := a[1:3, 3:5:6]
A multi-dimensional array may be sliced to create an nd-slice.
In
	var a [8][5]int
	b := a[2:6, 3:5]
`b` is a slice with lengths 4 and 2, capacities 6 and 2, and a stride of 5.
Represented graphically, the original `var a [8][5]int` is
00 01 02 03 04
10 11 12 13 14
20 21 22 23 24
30 31 32 33 34
40 41 42 43 44
50 51 52 53 54
60 61 62 63 64
70 71 72 73 74
After slicing, with `b := a[2:6, 3:5]`
-- -- -- -- --
-- -- -- -- --
-- -- -- 23 24
-- -- -- 33 34
-- -- -- 43 44
-- -- -- 53 54
-- -- -- -- --
-- -- -- -- --
where the numbered elements are those still visible to the slice.
The underlying data slice is
[23 24 -- -- -- 33 34 -- -- -- 43 44 -- -- -- 53 54]
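Under the hood, such a view is just an offset into the flat data plus the original stride. This sketch (current Go, illustrative names) reconstructs the elements visible through `b` from a flat row-major [8][5] layout:

```go
package main

import "fmt"

func main() {
	// Flat row-major storage for the [8][5] array above.
	const cols = 5
	a := make([]int, 8*cols)
	for i := 0; i < 8; i++ {
		for j := 0; j < cols; j++ {
			a[i*cols+j] = 10*i + j
		}
	}
	// The view a[2:6, 3:5] is an offset (to element 23) plus the
	// unchanged stride of the original array.
	offset := 2*cols + 3
	stride := cols
	lenI, lenJ := 4, 2
	for i := 0; i < lenI; i++ {
		for j := 0; j < lenJ; j++ {
			fmt.Print(a[offset+i*stride+j], " ")
		}
	}
	fmt.Println() // prints 23 24 33 34 43 44 53 54
}
```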
### Indexing
The simplest form of an index expression specifies a single integer index for
the left-most (outer-most) dimension of a slice, followed by 3-value (min, max,
cap) slice expressions for each of the inner dimensions.
This operation returns a slice with one dimension removed.
The returned slice shares the underlying data with the original slice, but with
a new offset and updated lengths and capacities.
As a shorthand, multiple indexing expressions may be combined into one.
That is, `t[1,2,:]` is equivalent to `t[1,:,:][2,:]`, and `t[5,4,1,2:4]` is
equivalent to `t[5,:,:,:][4,:,:][1,:][2:4]`.
It follows that specifying all of the indices gets a single element of the slice.
An important consequence of the indexing rules is that the "indexed dimensions"
must be the leftmost ones, and the "sliced dimensions" must be the rightmost ones.
Examples:
Continuing the example above, `b[1,:]` returns the slice `[]int{33, 34}`.
Other example statements (assume s has the appropriate dimension in each case):
	v := s[3,:]           // v has type []T  (s is a [,]T)
	v := s[0, 1:3, 1:4:5] // v has type [,]T (s is a [,,]T)
	v := s[:,:,2]         // Compile error: indexed dimensions must be the leftmost
	v := s[5]             // v has type T    (s is a []T)
	v := s[1,:,:][2,:][0] // v has type T    (s is a [,,]T)
	v := s[1,2,0]         // v has type T    (s is a [,,]T)
### Assignment
Assignment acts as it does in Go today.
The statement `s[i] = x` puts the value `x` into position `i` of the slice.
An index operation can be combined with an assignment operation to assign to a
higher-dimensional slice.
s[1,:,:][0,:][2] = x
For convenience, the slicing and access expressions can be elided.
s[1,0,2] = x // equivalent to the above
If any index is negative or if it is greater than or equal to the length
in that dimension, a runtime panic occurs.
Other combination operators are valid (assuming the slice is of correct type)
t := make([,]float64, [2]int{2,3})
t[1,2] = 6
t[1,2] *= 2 // Now contains 12
	t[3,3] = 4  // Runtime panic, out of bounds (possibly a compile-time error
// if all indices are constants)
### Reshaping
A new built-in `reshape` allows the data in a 1d slice to be re-interpreted as a
higher dimensional slice in constant time.
The pseudo-signature is `func reshape(s []T, [N]int) [,...]T`, where `N`
is an integer greater than one, and `[,...]T` is a slice of dimension `N`.
The returned slice shares the same underlying data as the input slice, and is
interpreted in the layout discussed in the "Allocation" section.
The product of the elements in the `[N]int` must not exceed the length of the
input slice, or a run-time panic will occur.
s := []float64{0, 1, 2, 3, 4, 5, 6, 7}
t := reshape(s, [2]int{4,2})
fmt.Println(t[2,0]) // prints 4
t[1,0] = -2
t2 := reshape(s, [...]int{2,2,2})
	fmt.Println(t2[0,1,0]) // prints -2
t3 := reshape(s, [...]int{2,2,2,2}) // runtime panic: reshape length mismatch
### Unpack
A new built-in `unpack` returns the underlying data slice and strides from a
higher dimensional slice.
The pseudo-signature is `func unpack(s [,...]T) ([]T, [N-1]int)`, where
`[,...]T` is a slice of dimension `N > 1`.
The returned `[N-1]int` holds the strides of the slice.
The returned slice shares the same underlying data as the input slice.
The first element of the returned slice is the first accessible element of the
slice (element 0 in the underlying data), and the last element of the
returned slice is the last accessible element of the slice.
For example, in a 2d-slice the end of the returned slice is element
`stride*len(s)[0]+len(s)[1]`.
t := [,]float64{{1,0,0},{0,1,0},{0,0,1}}
t2 := t[:2,:2]
s, stride := unpack(t2)
fmt.Println(stride) // prints 3
fmt.Println(s) // prints [1 0 0 0 1 0 0 0]
s[2] = 6
fmt.Println(t[0,2]) // prints 6
### Length / Capacity
Like slices, the `len` and `cap` built-in functions can be used on slices of
higher dimension.
Len and cap take in a slice and return an `[N]int` representing the lengths/
capacities in the dimensions of the slice.
If the slice is one-dimensional, an `int` is returned, not a `[1]int`.
lengths := len(t) // lengths is a [2]int
nRows := len(t)[0]
nCols := len(t)[1]
maxElems := cap(t)[0] * cap(t)[1]
### Copy
The built-in `copy` will be changed to allow two slices of equal dimension.
Copy returns an `[N]int` specifying the number of elements that were copied in each
dimension.
For a 1d-slice, an `int` will be returned instead of a `[1]int`.
n := copy(dst, src) // n is a [N]int
Copy copies the overlapping region: `min(len(dst)[0], len(src)[0])` sub-slices
in the first dimension, `min(len(dst)[1], len(src)[1])` elements of each
sub-slice in the second dimension, and so on.
dst := make([,]int, [2]int{6, 8})
src := make([,]int, [2]int{5, 10})
n := copy(dst, src) // n == [2]int{5, 8}
fmt.Println("All destination elements were overwritten:", n == len(dst))
Indexing can be used to copy data between slices of different dimension.
s := []int{0, 0, 0, 0, 0}
t := [,]int{{1,2,3}, {4,5,6}, {7,8,9}, {10,11,12}}
	copy(s, t[1,:]) // Copies the whole second row of the slice
fmt.Println(s) // prints [4 5 6 0 0]
copy(t[2,:], t[1,:]) // copies the second row into the third row
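The proposed per-dimension `min` semantics can be sketched with today's slices of slices (`copy2D` is a hypothetical helper, shown only to make the rule concrete):

```go
package main

import "fmt"

// copy2D copies the overlapping region of src into dst and reports
// how many rows and columns were copied, mirroring the proposed
// semantics of the generalized copy built-in.
func copy2D(dst, src [][]int) [2]int {
	rows := len(dst)
	if len(src) < rows {
		rows = len(src)
	}
	cols := 0
	for i := 0; i < rows; i++ {
		if c := copy(dst[i], src[i]); c > cols {
			cols = c
		}
	}
	return [2]int{rows, cols}
}

func main() {
	dst := make([][]int, 6)
	for i := range dst {
		dst[i] = make([]int, 8)
	}
	src := make([][]int, 5)
	for i := range src {
		src[i] = make([]int, 10)
		for j := range src[i] {
			src[i][j] = 1
		}
	}
	fmt.Println(copy2D(dst, src)) // prints [5 8]
}
```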
### Range
A range statement loops over the outermost dimension of a slice.
The "value" iteration variable is the `n-1` dimensional slice obtained by
indexing the first dimension.
That is,
for i, v := range s {
}
is identical to
for i := 0; i < len(s)[0]; i++ {
v := s[i,:, ...]
}
for multi-dimensional slices (and i < len(s) for one-dimensional ones).
#### Examples
Two-dimensional slices.
// Sum the rows of a 2d-slice
rowsum := make([]int, len(t)[0])
	for i, s := range t {
		for _, v := range s {
rowsum[i] += v
}
}
// Sum the columns of a 2d-slice
colsum := make([]int, len(t)[1])
	for _, s := range t {
		for j, v := range s {
colsum[j] += v
}
}
// Matrix-matrix multiply (given existing slices a and b)
	c := make([,]float64, [2]int{len(a)[0], len(b)[1]})
for i, sa := range a {
for k, va := range sa {
for j, vb := range b[k,:] {
c[i,j] += va * vb
}
}
}
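The same triple loop works today with the slice-of-slices representation; the version below is shown only to make the algorithm concrete (it lacks the layout and rectangularity guarantees of the proposal):

```go
package main

import "fmt"

// Mul multiplies a (m x k) by b (k x n) using the same loop order as
// the range-based example above, with slices of slices.
func Mul(a, b [][]float64) [][]float64 {
	c := make([][]float64, len(a))
	for i := range c {
		c[i] = make([]float64, len(b[0]))
	}
	for i, sa := range a {
		for k, va := range sa {
			for j, vb := range b[k] {
				c[i][j] += va * vb
			}
		}
	}
	return c
}

func main() {
	a := [][]float64{{1, 2}, {3, 4}}
	b := [][]float64{{5, 6}, {7, 8}}
	fmt.Println(Mul(a, b)) // prints [[19 22] [43 50]]
}
```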
Higher-dimensional slices
t3 := [,,]int{{{1, 2, 3, 4}, {5, 6, 7, 8}}, {{9, 10, 11, 12}, {13, 14, 15, 16}}}
for i, t2 := range t3 {
		fmt.Println(i, t2) // i ranges from 0 to 1, t2 is a [,]int
}
for j, s := range t3[1,:,:] {
		fmt.Println(j, s) // j ranges from 0 to 1, s is a []int
}
	for k, v := range t3[1,0,:] {
		fmt.Println(k, v) // k ranges from 0 to 3, v is an int
}
// Sum all of the elements
var sum int
for _, t2 := range t3 {
for _, s := range t2 {
for _, v := range s {
sum += v
}
}
}
### Reflect
Package reflect will have additions to support generalized slices.
In particular, enough will be added to enable calling C libraries with 2d-slice
data, as there is a large body of C libraries for numerical and graphics work.
Eventually, it will probably be desirable for reflect to add functions to
support multidimensional slices (MakeSliceN, SliceNOf, SliceN, etc.).
The exact signatures of these methods can be decided upon at a later date.
## Discussion
This section describes the rationale for the design choices made above, and
contrasts them with possible alternatives.
### Data Layout
Programming languages differ on the choice of row-major or column-major layout.
In Go, row-major ordering is forced by the existing semantics of arrays-of-arrays.
Furthermore, having a specific layout is more important than the exact choice so
that code authors can reason about data layout for optimal performance.
### Discussion -- Reshape
There are several use cases for reshaping, as discussed in the
[strided slices proposal](https://github.com/golang/go/issues/13253).
However, reshaping slices of arbitrary dimension (as proposed in the previous link)
does not compose with slicing (discussed more below).
This proposal allows for the common use case of transforming between linear and
multi-dimensional data while still allowing for slicing in the normal way.
The biggest question is whether the input slice to reshape should be exactly as
large as necessary, or if it only needs to be "long enough".
The "long enough" behavior saves a slicing operation, and seems to better match
the behavior of `copy`.
Another possible syntax for reshape is discussed in
[issue 395](https://github.com/golang/go/issues/395).
Instead of a new built-in, one could use `t := s.([m1,m2,...,mn]T)`, where s
is of type `[]T`, and the returned type is `[,...]T` with
`len(t) == [n]int{m1, m2, ..., mn}`.
As discussed in #395, the `.()` syntax is typically reserved for type assertions.
This isn't strictly overloaded, since []T is not an interface, but it could be
confusing to have similar syntax represent similar ideas.
The difference between s.([,]T) and s.([m,n]T) may be too large for how similar
the expressions appear -- the first asserts that the value stored in the
interface `s` is a [,]T, while the second reshapes a `[]T` into a `[,]T` with
lengths equal to `m` and `n`.
A built-in function avoids these subtleties, and better matches the proposed
`unpack` built-in.
### Discussion -- Unpack
Like `reshape`, `unpack` is useful for manipulating slices in higher dimensions.
One major use-case is allowing copy-free manipulation of data in a slice.
For example,
// Strided presents data as a `strided slice`, where elements are not
// contiguous in memory.
type Strided struct {
data []T
len int
stride int
}
	func (s Strided) At(i int) T {
		return s.data[i*s.stride]
	}
	func GetCol(s [,]T, i int) Strided {
		data, stride := unpack(s)
		return Strided{data: data[i:], len: len(s)[0], stride: stride}
	}
See the indexing discussion section for more uses of this type.
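Everything in this pattern except the `unpack` call compiles today. A runnable version over a flat row-major slice follows (field and helper names are illustrative; `len` is renamed `n` here):

```go
package main

import "fmt"

// Strided presents elements that are not contiguous in memory:
// element i lives at data[i*stride].
type Strided struct {
	data   []float64
	n      int
	stride int
}

func (s Strided) At(i int) float64 { return s.data[i*s.stride] }

func main() {
	// A 3x3 identity matrix stored flat in row-major order.
	data := []float64{1, 0, 0, 0, 1, 0, 0, 0, 1}
	stride := 3
	// View of column 1: start at element 1, step by the stride.
	col := Strided{data: data[1:], n: 3, stride: stride}
	for i := 0; i < col.n; i++ {
		fmt.Print(col.At(i), " ")
	}
	fmt.Println() // prints 0 1 0
}
```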
Unpack is also necessary to pass slices to C code (and others) without copying data.
Using the `len` and `unpack` built-in functions provides enough information to
make such a call.
An example function is `Dgeqrf` which computes the QR factorization of a matrix.
The C signature is (roughly)
dgeqrf(int m, int n, double* a, int lda)
A Go-wrapper to this function could be implemented as
	// Dgeqrf computes the QR factorization in-place using a call through cgo to LAPACKE
func Dgeqrf(d Dense) {
l := len(d)
data, stride := unpack(d)
C.dgeqrf((C.int)(l[0]), (C.int)(l[1]), (*C.double)(&data[0]), (C.int)(stride))
}
Such a wrapper is impossible without unpack, as there is otherwise no way to
extract the underlying []float64 and stride short of using unsafe.
Finally, `unpack` allows users to reshape higher-dimensional slices into one
another.
The user must check that the slice has not been viewed for this operation to
have the expected behavior.
	// Reshape23 reshapes a 2d slice into a 3d slice of the specified size. The
	// minor dimension of a must not have been sliced.
func Reshape23(a [,]int, sz [3]int) [,,]int {
data, stride := unpack(a)
		if stride != len(a)[1] {
panic("a has been viewed")
}
return reshape(data, sz)
}
### Indexing
A controversial aspect of the proposal is that indexing is asymmetric.
That is, an index expression has to be the left-most element
t[1,:,:] // allowed
t[:,:,1] // not allowed
The second expression is disallowed in this proposal so that the rightmost
(innermost) dimension always has a stride of 1 to match existing 1d-slice
semantics.
A proposal that enables symmetric indexing, such as `t[0,:,1]`, requires the
returned 1d object to contain a stride.
This is incompatible with Go slices today, but perhaps there could be a better
proposal nevertheless.
Let us examine possible alternatives.
It seems any proposal must address this issue in one of the following ways.
0. Accept the asymmetry (this proposal)
1. Do not add a higher-dimensional rectangular structure (Go today)
2. Disallow asymmetry by forbidding indexing
3. Modify the implementation of current Go slices to be strided
4. Add "strided slices" as a distinct type in the language
Option 1: This proposal, of course, feels that a multi-dimensional rectangular
structure is a good addition to the language (see Background section).
While multi-dimensional slices add some complexity to the language, this proposal
is a natural extension to slice semantics.
There are very few new rules to learn once the basics of slices are understood.
The generalization of slices decreases the complexity of many specific algorithms,
and so this proposal believes Go is improved on the whole with this generalization.
Option 2: One alternative is to keep the generalization of slices
proposed here, but eliminate asymmetry by disallowing indexing in all
dimensions, even the leftmost.
Under this kind of proposal, accessing a specific element of a slice is allowed
v := s[0,1,2]
but not selecting a full sub-slice
v := s[0,:,:]
While this is possible, it eliminates two major benefits of the indexing behavior
proposed here.
First, indexing allows for copy-free passing of data subsections to algorithms that
require a lower-dimensional slice.
For example,
func mean(s []float64) float64 {
var m float64
for _, v := range s {
m += v
}
return m / float64(len(s))
}
func means(t [,]float64) []float64 {
m := make([]float64, len(t)[0]) // syntax discussed below
for i := range m {
m[i] = mean(t[i,:])
}
return m
}
Second, indexing, as specified, provides a very clear definition of `range` on
slices.
Without generalized indexing, it is unclear how `range` should behave or what the
syntax should be.
These benefits seem sufficient to include indexing in a proposal.
Option 3: Perhaps instead of generalizing Go slices as they are today, we should
change the implementation of 1d slices to be strided.
This would of course have to wait for Go 2, but a multi-dimensional slice would
then naturally have `N` strides instead of `N-1`, and indexing can happen along
any dimension.
It seems that this change would not be beneficial.
First of all, there is a lot of code which relies on the assumption that slices
are contiguous.
All of this code would need to be re-written if the implementation of slices
were modified.
More importantly, it's not clear that the basic operation of accessing a strided
slice could be made as efficient as accessing a contiguous slice, since a strided
slice requires an additional multiplication by the stride.
Additionally, contiguous data makes optimization such as SIMD much easier.
Increasing the cost of all Go programs just to allow generalized indexing does
not seem like a good trade, and even programs that do use indexing may be slower
overall because of these extra costs.
It seems that having a linear data structure that is guaranteed to be contiguous
is very useful for compatibility and efficiency reasons.
Option 4: The last possibility is to abandon the idea of Go slices as the 1d case,
and instead build a proposal around a "strided slice" type.
In such a proposal, a "strided slice" is another 1d data structure in Go, that is
like a slice, except the data is strided rather than contiguous.
Here we will refer to such a type as `[:]T`.
Higher dimensional slices would really be higher dimensional strided slices,
`[:,:]T`, `[:,:,:]T`, containing `N` strides rather than `N-1`.
This allows for indexing in any dimension, for example `t[:,1]` would return a
`[:]T`.
The syntactic sugar is clearly nice, but do the benefits outweigh the costs?
The benefit of such a type is to allow copy-free access to a column.
However, as stated in the `unpack` discussion section, it is already possible to
get access along a single column by implementing a Strided-like type.
Such a type could be (and is) implemented in a matrix library, for example
type Vector struct {
data []float64
len int
stride int
}
type Dense [,]float64
// ColView returns a Vector whose elements are the i^th column of the receiver
	func (d Dense) ColView(i int) Vector {
		s, stride := unpack(d)
		return Vector{
			data:   s[i:],
			len:    len(d)[0],
			stride: stride,
		}
	}
	// Diag returns a Vector whose elements are the diagonal of the receiver
	func (d Dense) Diag() Vector {
s, stride := unpack(d)
return Vector{
data: s,
len: len(d)[0],
stride: stride+1,
}
}
The `Vector` type can be used to construct higher-level functions.
// Dot computes the dot product of two vectors
func Dot(a, b Vector) float64 {
if a.Len() != b.Len() {
panic("vector length mismatch")
}
		var dot float64
		for i := 0; i < a.Len(); i++ {
dot += a.At(i) * b.At(i)
}
return dot
}
// Mul returns the multiplication of matrices a and b.
func Mul(a, b Dense) Dense {
c := make(Dense, [2]int{len(a)[0], len(b)[1]})
for i := range c {
			for j := range c[0,:] {
c[i,j] = Dot(a.RowView(i), b.ColView(j))
}
}
return c
}
Thus, we can see that most of the behavior in strided slices is implementable
under the current proposal.
It seems that Vector has costs relative to traditional Go slices: indexing is
more expensive, and it is not immediately obvious where an API should use
`Vector` and where an API should use `[]float64`.
While these costs are real, these costs are also present with strided slices.
There are remaining benefits to a built-in strided slice type, but they are
mostly syntax.
It's easier to write `s[:,0]` than to use a strided type; Go doesn't have
generics, so a separate Strided type is needed for each `T`; and range could
not work on a user-defined Strided type.
It's also likely easier to implement bounds checking elimination when the compiler
fully controls the data.
These benefits are not insignificant, but there are also costs in adding a `[:]T`
to the language.
A major cost is the plain addition of a new generic type.
Go is built on implementing a small set of orthogonal features that compose together
cleanly.
Strided slices are far from orthogonal with Go slices; they have almost exactly
the same function in the language.
Beyond that, the benefits to slices seem to be tempered by other consequences
of their implementation.
One argument for strided slices is to eliminate the cognitive dissonance in being
able to slice in one dimension but not another.
But, we also have to consider the cognitive complexity of additional language
features, and their interactions with built-in types.
Strided slices are almost identical to Go slices, but with small incompatibilities.
A user would have to learn the interactions between `[]T` and `[:]T` in terms of
assignability and/or conversions, the behavior of `copy`, `append`, etc.
Learning all of these rules is likely more difficult than learning that indexing
is asymmetric.
Finally, while strided slices arguably reduce the costs to column viewing, they
increase the costs in other areas like C interoperability.
Tools like LAPACK only allow matrices with an inner stride of 1, so strided slice
data would need extra allocation and copying before calls to LAPACK, potentially
limiting some of the savings.
It seems the costs of a strided-slice built-in type outweigh their benefits,
especially in the presence of relatively easy language workarounds under the
current proposal.
It thus seems that even if we were designing Go from scratch today, we would still
want the proposed behavior here, where we accept the limitations of asymmetric
indexing to keep a smaller, more orthogonal language.
### Use of [N]int for Predefined Functions
This document proposes that `make`, `len`, `copy`, etc. accept and return `[N]int`.
This section describes possible alternatives, and defends this choice.
For the `len` built-in, it seems like there are four possible choices.
1. `lengths := len(t) // returns [N]int` (this proposal)
2. `length := len(t, 0) // returns the length of the slice along the first dimension`
3. `len(t[0,:]) or len(t[ ,:]) // returns the length along the second dimension`
4. `m, n, p, ... := len(t)`
The main uses of `len` either require a specific length from a slice (as in a
for statement), or getting all of the lengths of a slice (size comparison).
We would thus like to make both operations easy.
Option 3 can be ruled out immediately, as it requires special parsing syntax to
account for zero-length dimensions.
For example, the expression `len(t[0,:])` blows up if the slice has length 0
in the first dimension (and how else would you know the length except with `len`?).
Option 2 seems strictly inferior to option 1.
Getting an individual length is almost exactly the same in both cases, compare
`len(t)[1]` and `len(t,1)`, but getting all of the lengths is much harder in
option 2.
This leaves options 1 and 4.
They both return all lengths, and it is easy to use a specific length.
However, option 1 seems easier to work with in several ways.
The full lengths of slices are much easier to compare:
if len(s) != len(t){...}
vs.
ms, ns := len(s)
mt, nt := len(t)
if ms != mt || ns != nt {...}
It is also easier to compare a specific dimension
if len(s)[0] != len(t)[0]{...}
vs
ms, _ := len(s)
mt, _ := len(t)
if ms != mt {...}
Option 1 is also easier in a for loop
for j := 0; j < len(s)[1]; j++ {...}
vs.
_, n := len(s)
for j := 0; j < n; j++ {...}
All of the examples above are in two-dimensions, which is arguably the best case
scenario for option 4.
Option 1 scales as the dimensions get higher, while option 4 does not.
Comparing a single length for a [,,,]T we see
if len(s)[1] != len(t)[1]
vs.
_, ns, _, _ := len(s)
_, nt, _, _ := len(t)
if ns != nt {...}
Comparing all lengths is much worse.
Based on `len` alone, it seems that option 1 is much better.
Let us look at the interactions with the other predeclared functions.
First of all, it seems clear that the predeclared functions should all use
similar syntax, if possible.
If option 1 is used, then `make` should accept `[N]int`, and copy should return
an `[N]int`, while if option 4 is used `make` should accept individual arguments
as in
make([,]T, len1, len2, cap1, cap2)
and `copy` should return individual arguments
m,n := copy(s,t)
The simplest case of using `make` with known dimensions seems slightly better
for option 4.
make([,]T, m, n)
is nicer than
make([,]T, [2]int{m,n})
However, this seems like the only case where it is nicer.
If `m` and `n` are coming as arguments in a function, it may frequently be easier
to pass a `[2]int`, at which point option 4 forces
make([,]T, lens[0], lens[1])
It is debatable in two dimensions, but in higher dimensions it seems clear that
passing a `[5]int` is easier than 5 individual dimensions, at which point
make([,,,,]T, lens)
is much easier than
make([,,,,]T, lens[0], lens[1], lens[2], lens[3], lens[4])
There are other common operations we should consider.
Making the same size slice is much easier under option 1
make([,]T, len(s))
vs.
m, n := len(s)
make([,]T, m, n)
Or consider making the receiver for a matrix multiplication
l := [2]int{len(a)[0], len(b)[1]}
c := make([,]T, l)
vs.
m, _ := len(a)
_, n := len(b)
c := make([,]T, m, n)
Finally, compare a grow-like operation.
l := len(a)
l[0] *= 2
l[1] *= 2
b := make([,]T, l)
vs.
m, n := len(a)
m *= 2
n *= 2
c := make([,]T, m, n)
The only example where option 4 is significantly better than option 1 is using
`make` with variables that already exist individually.
In all other cases, option 1 is at least as good as option 4, and in many cases
option 1 is significantly nicer.
It seems option 1 is preferable overall.
### Range
This behavior is a natural extension to the idea of range as looping over a
linear index.
Other ideas were considered, but all are significantly more complicated.
For instance, in a previous iteration of this draft, new syntax was introduced
for range clauses.
An alternate possibility is that range should loop over all elements of the slice,
not just the major dimension.
First of all, it is not clear what the "index" portion of the clause should be.
If an `[N]int` is returned, as for the predeclared functions, it seems annoying
to use.
// Apply a linear transformation to each element.
for i, v := range t {
t[i[0],i[1],i[2]] = 3*v + 4
}
Perhaps each index should be individually returned, as in
for i, j, k, v := range t {
t[i,j,k] = 3*v + 4
}
which seems okay, but it could be hard to tell if the final value is an index or
an element.
The bigger problem is that this definition of range means that it is required
to write a for-loop to index over the major dimension, an extremely common
operation.
The proposed range syntax enables ranging over all elements (using multiple range
statements), and makes ranging over the major dimension easy.
## Compatibility
This change is fully backward compatible with the Go1 spec.
## Implementation
A slice can be implemented in Go with the following data structure
type Slice struct {
Data uintptr
Len [N]int
Cap [N]int
Stride [N-1]int
}
As special cases, the 1d-slice representation would be as now, and a 2d-slice
would have the `Stride` field as an `int` instead of a `[1]int`.
Access and assignment can be performed using the strides.
For a two-dimensional slice, `t[i,j]` gets the element at `i*stride + j` in the
array pointed to by the Data uintptr.
More generally, `t[i0,i1,...,iN-2,iN-1]` gets the element at
i0 * stride[0] + i1 * stride[1] + ... + iN-2 * stride[N-2] + iN-1
When a new slice is allocated, `Stride` is set to `Cap[N-1]`.
Slicing is as simple as updating the pointer, lengths, and capacities.
t[i0:j0:k0, i1:j1:k1, ..., iN-1:jN-1:kN-1]
causes `Data` to update to the element indexed by `[i0,i1,...,iN-1]`,
`Len[d] = jd - id`, `Cap[d] = kd - id`, and Stride is unchanged.
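A minimal model of this header and the slicing rule can be written in today's Go for two dimensions. This is a sketch of the semantics, not the real runtime representation (a concrete data slice stands in for the raw pointer):

```go
package main

import "fmt"

// slice2 sketches the proposed 2d-slice header: for two dimensions,
// Stride is a single int and element (i, j) lives at data[i*stride+j].
type slice2 struct {
	data   []int // stands in for the Data pointer
	len    [2]int
	cap    [2]int
	stride int
}

// at returns the element at (i, j), with a length-based bounds check.
func (s slice2) at(i, j int) int {
	if i < 0 || i >= s.len[0] || j < 0 || j >= s.len[1] {
		panic("index out of range")
	}
	return s.data[i*s.stride+j]
}

// slice implements s[i0:j0, i1:j1]: move the base pointer, shrink the
// lengths and capacities, and keep the stride.
func (s slice2) slice(i0, j0, i1, j1 int) slice2 {
	return slice2{
		data:   s.data[i0*s.stride+i1:],
		len:    [2]int{j0 - i0, j1 - i1},
		cap:    [2]int{s.cap[0] - i0, s.cap[1] - i1},
		stride: s.stride,
	}
}

func main() {
	// The [8][5] example from the Slicing section.
	a := slice2{data: make([]int, 8*5), len: [2]int{8, 5}, cap: [2]int{8, 5}, stride: 5}
	for i := 0; i < 8; i++ {
		for j := 0; j < 5; j++ {
			a.data[i*5+j] = 10*i + j
		}
	}
	b := a.slice(2, 6, 3, 5)
	fmt.Println(b.at(1, 1))   // prints 34
	fmt.Println(b.len, b.cap) // prints [4 2] [6 2]
}
```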
## Implementation Schedule
Help is needed to determine the when and who for the implementation of this
proposal.
The gonum team would translate the code in gonum/matrix, gonum/blas,
and gonum/lapack to assist with testing the implementation.
## Non-goals
This proposal intentionally omits several suggested behaviors.
This is not to say those proposals can't ever be added (nor does it imply that
they will be added), but that they provide additional complications and can be
part of a separate proposal.
### Append
This proposal does not allow append to be used with higher-dimensional slices.
It seems natural that one could, say, append a [,]T to the "end" of a [,,]T,
but the interaction with slicing is tricky.
If a new slice is allocated, does it fill in the gaps with zero values?
### Arithmetic Operators
Some have called for slices to support arithmetic operators (+, -, *) to also
work on `[,]Numeric` (`int`, `float64`, etc.), for example
	a := make([,]float64, [2]int{1000, 3000})
	b := make([,]float64, [2]int{3000, 2000})
c := a*b
While operators can allow for very succinct code, they do not seem to fit in Go.
Go's arithmetic operators only work on numeric types; they don't work on slices.
Secondly, arithmetic operators in Go are all fast, whereas the operation above
is many orders of magnitude more expensive than a floating point multiply.
Finally, multiplication could either mean element-wise multiplication, or
standard matrix multiplication.
Both operations are needed in numerical work, so such a proposal would require
additional operators to be added (such as `.*`).
Especially in terms of clock cycles per character, `c.Mul(a,b)` is not that bad.
## Conclusion
Matrices are widely used in numerical algorithms, and have been used in
computing arguably even before there were computers.
With time and effort, Go could be a great language for numerical computing (for
all of the same reasons it is a great general-purpose language), but first it
needs a rectangular data structure, the extension of slices to higher dimensions,
built into the language as a foundation for more advanced libraries.
This proposal describes a behavior for slices which is a strict improvement over
the options currently available.
It will be faster than the single-slice representation (index optimization and
range), more convenient than the slice of slice representation (range, copy,
len), and will provide a correct representation of the data that is more
compile-time verifiable than the struct representation.
The desire for slices is not driven by syntax and ease-of-use, though that is a
huge benefit, but instead a request for safety and speed; the desire to build
"simple, reliable, and efficient software".
| | Correct Representation | Access/Assignment Convenience | Speed |
| -------------: | :--------------------: | :---------------------------: | :---: |
| Slice of slice | X | ✓ | X |
| Single slice | X | X | ✓ |
| Struct type | ✓ | X | X |
| Built-in | ✓ | ✓ | ✓ |
## Open issues
1. In the discussion, it was mentioned that adding a SliceHeader2 is a bad idea.
This can be removed from the proposal, but some other mechanism should be added
that allows data in 2d-slices to be passed to C.
It has been suggested that the type
type NDimSliceHeader struct {
Data unsafe.Pointer
Stride []int // len(N-1)
Len []int // len(N)
Cap []int // len(N)
}
would be sufficient.
2. The "reshaping" syntax as discussed above.
3. In a slice literal, if part of the slice is specified with a key-element literal,
does the whole expression need to use key-element syntax?
4. Given the presence of `unpack`, is there any use for three-element syntax?
# Proposal: testing: programmatic sub-test and sub-benchmark support
Author: Marcel van Lohuizen
_With input from Sameer Ajmani, Austin Clements, Russ Cox, Bryan Mills, and Damien Neil._
Last updated: September 2, 2015
Discussion at https://golang.org/issue/12166.
## Abstract
This proposal introduces programmatic subtests and subbenchmarks for a variety
of purposes.
## Background
Adding Run methods for spawning subtests and subbenchmarks addresses a variety
of needs, including:
* an easy way to select single tests and benchmarks from a table from the
command line (e.g. for debugging),
* simplify writing a collection of similar benchmarks,
* use of Fail and friends in subtests,
* creating subtests from an external/dynamic table source,
* more control over scope of setup and teardown than TestMain provides,
* more control over parallelism,
* cleaner code, compared to many top-level functions, both for tests and
benchmarks, and
* eliminate need to prefix each error message in a test with its subtest's name.
## Proposal
The proposals for tests and benchmarks are discussed separately below. A separate
section explains logging and how to select subbenchmarks and subtests on the
command line.
### Subtests
T gets the following method:
```go
// Run runs f as a subtest of t called name. It reports whether f succeeded.
// Run will block until all its parallel subtests have completed.
func (t *T) Run(name string, f func(t *testing.T)) bool
```
Several methods get further clarification on their behavior for subfunctions.
Changes are shown between square brackets:
```go
// Fail marks the function [and its calling functions] as having failed but
// continues execution.
func (c *common) Fail()
// FailNow marks the function as having failed, stops its execution
// [and aborts pending parallel subtests].
// Execution will continue at the next test or benchmark.
// FailNow must be called from the goroutine running the
// test or benchmark function, not from other goroutines
// created during the test. Calling FailNow does not stop
// those other goroutines.
func (c *common) FailNow()
// SkipNow marks the test as having been skipped, stops its execution
// [and aborts pending parallel subtests].
// ... (analogous to FailNow)
func (c *common) SkipNow()
```
A NumFailed method might be useful as well.
#### Examples
A simple example:
```go
tests := []struct {
A, B int
Sum int
}{
{ 1, 2, 3 },
{ 1, 1, 2 },
{ 2, 1, 3 },
}
func TestSum(t *testing.T) {
for _, tc := range tests {
t.Run(fmt.Sprint(tc.A, "+", tc.B), func(t *testing.T) {
if got := tc.A + tc.B; got != tc.Sum {
t.Errorf("got %d; want %d", got, tc.Sum)
}
})
}
}
```
Note that we write `t.Errorf("got %d; want %d")` instead of something like
`t.Errorf("%d+%d = %d; want %d")`: the subtest's name already uniquely
identifies the test.
Select (sub)tests from the command line using -test.run:
```
go test --run=TestFoo/1+2 # selects the first test in TestFoo
go test --run=TestFoo/1+  # selects tests for which A == 1 in TestFoo
go test --run=.*/1+       # for any top-level test, select subtests matching "1+"
```
Skipping a subtest will not terminate subsequent tests in the calling test:
```go
func TestFail(t *testing.T) {
for i, tc := range tests {
t.Run(fmt.Sprint(tc.A, "+", tc.B), func(t *testing.T) {
if tc.A < 0 {
t.Skip(i) // terminate test i, but proceed with test i+1.
}
if got := tc.A + tc.B; got != tc.Sum {
t.Errorf("got %d; want %d", got, tc.Sum)
}
})
}
}
```
A more concrete and realistic use case is an adaptation of bufio.TestReader:
```go
func TestReader(t *testing.T) {
var texts [31]string
str := ""
all := ""
for i := 0; i < len(texts)-1; i++ {
texts[i] = str + "\n"
all += texts[i]
str += string(i%26 + 'a')
}
texts[len(texts)-1] = all
for _, readmaker := range readMakers {
		t.Run("readmaker="+readmaker.name, func(t *testing.T) {
for _, text := range texts {
for _, bufreader := range bufreaders {
for _, bufsize := range bufsizes {
read := readmaker.fn(strings.NewReader(text))
buf := NewReaderSize(read, bufsize)
						s := bufreader.fn(buf)
						if s != text {
							t.Fatalf("fn=%s bufsize=%d got=%q; want=%q",
								bufreader.name, bufsize, s, text)
}
}
}
}
})
}
}
```
In this example the use of `t.Fatalf` avoids getting a large number of similar
errors for different buffer sizes.
In case of failure, testing will resume with testing the next reader.
Run a subtest in parallel:
```go
func TestParallel(t *testing.T) {
for _, tc := range tests {
tc := tc // Must capture the range variable.
t.Run(tc.Name, func(t *testing.T) {
t.Parallel()
...
})
}
}
```
Run teardown code after a few tests:
func TestTeardown(t *testing.T) {
t.Run("Test1", test1)
t.Run("Test2", test2)
t.Run("Test3", test3)
// teardown code.
}
Run teardown code after parallel tests:
```go
func TestTeardownParallel(t *testing.T) {
// By definition, this Run will not return until the parallel tests finish.
t.Run("block", func(t *testing.T) {
t.Run("Test1", parallelTest1)
t.Run("Test2", parallelTest2)
t.Run("Test3", parallelTest3)
})
// teardown code.
}
```
Test1-Test3 will run in parallel with each other, but not with any other
parallel tests.
This follows from the fact that `Run` will block until _all_ subtests have
completed and that both TestTeardownParallel and "block" are sequential.
### Flags
The -test.run flag allows filtering of subtests given a prefix path where each
path component is a regular expression, applying the following rules:
* Each test has its own (sub) name.
* Each test has a level. A top-level test (e.g. `func TestFoo`) has level 1, a
test invoked with Run by such test has level 2, a test invoked by such a
subtest level 3, and so forth.
* A (sub)test is run if the (level-1)-th regular expression
resulting from splitting -test.run by '/' matches the test's name or if this
regular expression is not defined.
So for a -test.run not containing a '/' the behavior is identical to the current
behavior.
* Spaces (as defined by unicode.IsSpace) are replaced by underscores.
Rules for -test.bench are analogous, except that only a non-empty -test.bench
flag triggers running of benchmarks.
The implementation of these flags semantics can be done later.
#### Examples:
Select top-level test “TestFoo” and all its subtests:
go test --run=TestFoo
Select top-level tests that contain the string “Foo” and all their subtests that
contain the string “A:3”.
go test --run=Foo/A:3
Select all subtests of level 2 which name contains the string “A:1 B:2” for any
top-level test:
go test --run=.*/A:1_B:2
The latter could match, for example, struct{A, B int}{1, 2} printed with %+v.
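The level-by-level matching rule can be sketched in plain Go. This is only an illustrative approximation of the proposed semantics — `matchTest` is a hypothetical helper, not part of the testing package — but it shows how a -test.run value is split on '/' and each component is applied as an unanchored regular expression to the corresponding level of the test name:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matchTest reports whether a test with the given full name (levels joined
// by '/') is selected by the -test.run pattern: the pattern is split on '/',
// each component is matched as a regular expression against the same level
// of the test name, and levels with no corresponding component match.
func matchTest(pattern, name string) bool {
	pats := strings.Split(pattern, "/")
	for i, level := range strings.Split(name, "/") {
		if i >= len(pats) {
			return true // no regular expression defined for this level
		}
		ok, err := regexp.MatchString(pats[i], level)
		if err != nil || !ok {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(matchTest("TestFoo", "TestFoo/a=1"))      // true: all subtests of TestFoo
	fmt.Println(matchTest("Foo/A:3", "TestFoo/A:3_B:1"))  // true: unanchored match per level
	fmt.Println(matchTest("Foo/A:3", "TestBar/A:3_B:1"))  // false: top level does not match
}
```

Note that without a '/' in the pattern, only the first level is constrained, which matches today's behavior.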
### Subbenchmarks
The following method would be added to B:
// Run benchmarks f as a subbenchmark with the given name. It reports
// whether there were any failures.
//
// A subbenchmark is like any other benchmark. A benchmark that calls Run at
// least once will not be measured itself and will only run for one iteration.
func (b *B) Run(name string, f func(b *testing.B)) bool
The `Benchmark` function gets an additional clarification (addition between []):
// Benchmark benchmarks a single function. Useful for creating
// custom benchmarks that do not use the "go test" command.
// [
// If f calls Run, the result will be an estimate of running all its
// subbenchmarks that don't call Run in sequence in a single benchmark.]
func Benchmark(f func(b *B)) BenchmarkResult
See the Rationale section for an explanation.
### Example
The following code shows the use of two levels of subbenchmarks.
It is based on a possible rewrite of
golang.org/x/text/unicode/norm/normalize_test.go.
```go
func BenchmarkMethod(b *testing.B) {
for _, tt := range allMethods {
b.Run(tt.name, func(b *testing.B) {
for _, d := range textdata {
fn := tt.f(NFC, []byte(d.data)) // initialize the test
b.Run(d.name, func(b *testing.B) {
b.SetBytes(int64(len(d.data)))
for i := 0; i < b.N; i++ {
fn()
}
})
}
})
}
}
var allMethods = []struct {
name string
f func(to Form, b []byte) func()
}{
{"Transform", transform },
{"Iter", iter },
...
}
func transform(f Form, b []byte) func()
func iter(f Form, b []byte) func()
var textdata = []struct { name, data string }{
{"small_change", "No\u0308rmalization"},
{"small_no_change", "nörmalization"},
{"ascii", ascii},
{"all", txt_all},
}
```
Note that there is some initialization code above the second Run.
Because it is outside of Run, there is no need to call ResetTimer.
As `Run` starts a new Benchmark, it is not possible to hoist the SetBytes call
in a similar manner.
The output for the above benchmark, without additional logging, could look something like:
BenchmarkMethod/Transform/small_change-8 200000 668 ns/op 22.43 MB/s
BenchmarkMethod/Transform/small_no_change-8 1000000 100 ns/op 139.13 MB/s
BenchmarkMethod/Transform/ascii-8 10000 22430 ns/op 735.60 MB/s
BenchmarkMethod/Transform/all-8 1000 128511 ns/op 43.82 MB/s
BenchmarkMethod/Iter/small_change-8 200000 701 ns/op 21.39 MB/s
BenchmarkMethod/Iter/small_no_change-8 500000 321 ns/op 43.52 MB/s
BenchmarkMethod/Iter/ascii-8 1000 210633 ns/op 78.33 MB/s
BenchmarkMethod/Iter/all-8 1000 235950 ns/op 23.87 MB/s
BenchmarkMethod/ToLower/small_change-8 300000 475 ns/op 31.57 MB/s
BenchmarkMethod/ToLower/small_no_change-8 500000 239 ns/op 58.44 MB/s
BenchmarkMethod/ToLower/ascii-8 500 297486 ns/op 55.46 MB/s
BenchmarkMethod/ToLower/all-8 1000 151722 ns/op 37.12 MB/s
BenchmarkMethod/QuickSpan/small_change-8 2000000 70.0 ns/op 214.20 MB/s
BenchmarkMethod/QuickSpan/small_no_change-8 1000000 115 ns/op 120.94 MB/s
BenchmarkMethod/QuickSpan/ascii-8 5000 25418 ns/op 649.13 MB/s
BenchmarkMethod/QuickSpan/all-8 1000 175954 ns/op 32.01 MB/s
BenchmarkMethod/Append/small_change-8 200000 721 ns/op 20.78 MB/s
ok golang.org/x/text/unicode/norm 5.601s
The only change in the output is the characters allowable in the Benchmark name.
The output is identical in the absence of subbenchmarks.
This format is compatible with tools like benchstat.
### Logging
Logs for tests are printed hierarchically. Example:
--- FAIL: TestFoo (0.03s)
display_test.go:75: setup issue
--- FAIL: TestFoo/{Alpha:1_Beta:1} (0.01s)
display_test.go:75: Foo(Beta) = 5: want 6
--- FAIL: TestFoo/{Alpha:1_Beta:3} (0.01s)
display_test.go:75: Foo(Beta) = 5; want 6
display_test.go:75: Foo(Beta) = 5; want 6
display_test.go:75: Foo(Beta) = 5; want 6
display_test.go:75: Foo(Beta) = 5; want 6
display_test.go:75: setup issue
--- FAIL: TestFoo/{Alpha:1_Beta:4} (0.01s)
display_test.go:75: Foo(Beta) = 5; want 6
display_test.go:75: Foo(Beta) = 5; want 6
display_test.go:75: Foo(Beta) = 5; want 6
display_test.go:75: Foo(Beta) = 5; want 6
--- FAIL: TestFoo/{Alpha:1_Beta:8} (0.01s)
display_test.go:75: Foo(Beta) = 5; want 6
display_test.go:75: Foo(Beta) = 5; want 6
display_test.go:75: Foo(Beta) = 5; want 6
--- FAIL: TestFoo/{Alpha:1_Beta:9} (0.03s)
display_test.go:75: Foo(Beta) = 5; want 6
For each header, we include the full name, repeating the name of the parent.
This makes it easier to identify the specific test from within the local context
and obviates the need for tools to keep track of context.
For benchmarks we adopt a different strategy: it is important to
be able to relate the logs that might have influenced performance to the
respective benchmark.
This means we should ideally interleave the logs with the benchmark results.
```
--- BENCH: BenchmarkForm/from_NFC-8
normalize_test.go:768: Some message
BenchmarkForm/from_NFC/canonical/to_NFC-8 10000 15914 ns/op 166.64 MB/s
--- BENCH: BenchmarkForm/from_NFC-8
normalize_test.go:768: Some message
--- BENCH: BenchmarkForm/from_NFC/canonical-8
normalize_test.go:776: Some message.
BenchmarkForm/from_NFC/canonical/to_NFD-8 10000 15914 ns/op 166.64 MB/s
--- BENCH: BenchmarkForm/from_NFC/canonical-8
normalize_test.go:776: Some message.
--- BENCH: BenchmarkForm/from_NFC/canonical/to_NFD-8
normalize_test.go:789: Some message.
normalize_test.go:789: Some message.
normalize_test.go:789: Some message.
BenchmarkForm/from_NFC/canonical/to_NFKC-8 10000 15170 ns/op 174.82 MB/s
--- BENCH: BenchmarkForm/from_NFC/canonical-8
normalize_test.go:776: Some message.
BenchmarkForm/from_NFC/canonical/to_NFKD-8 10000 15881 ns/op 166.99 MB/s
--- BENCH: BenchmarkForm/from_NFC/canonical-8
normalize_test.go:776: Some message.
BenchmarkForm/from_NFC/ext_latin/to_NFC-8 5000 30720 ns/op 52.86 MB/s
--- BENCH: BenchmarkForm/from_NFC-8
normalize_test.go:768: Some message
BenchmarkForm/from_NFC/ext_latin/to_NFD-8 2000 71258 ns/op 22.79 MB/s
--- BENCH: BenchmarkForm/from_NFC/ext_latin/to_NFD-8
normalize_test.go:789: Some message.
normalize_test.go:789: Some message.
normalize_test.go:789: Some message.
BenchmarkForm/from_NFC/ext_latin/to_NFKC-8 5000 32233 ns/op 50.38 MB/s
```
No bench results are printed for "parent" benchmarks.
## Rationale
### Alternative
One alternative to the given proposal is to define variants of tests as
top-level tests or benchmarks that call helper functions.
For example, the use case explained above could be written as:
```go
func doSum(t *testing.T, a, b, sum int) {
if got := a + b; got != sum {
t.Errorf("got %d; want %d", got, sum)
}
}
func TestSumA1B2(t *testing.T) { doSum(t, 1, 2, 3) }
func TestSumA1B1(t *testing.T) { doSum(t, 1, 1, 2) }
func TestSumA2B1(t *testing.T) { doSum(t, 2, 1, 3) }
```
This approach can work well for smaller sets, but starts to get tedious for
larger sets.
Some disadvantages of this approach:
1. considerably more typing for larger test sets (less code, much larger test cases),
1. duplication of information in test name and test values,
1. may get unwieldy if a Cartesian product of multiple tables is used as source,
1. doesn't work well with dynamic table sources,
1. does not allow for the same flexibility of inserting setup and teardown code
as using Run,
1. does not allow for the same flexibility in parallelism as using Run,
1. no ability to terminate a subgroup of tests early.
Some of these objections can be addressed by generating the test cases.
It seems, though, that addressing anything beyond point 1 and 2 with generation
would require more complexity than the addition of Run introduces.
Overall, it seems that the benefits of the proposed addition outweigh the
benefits of an approach using generation as well as expanding tests by hand.
### Subtest semantics
A _subtest_ refers to a call to Run and a _test function_ refers to the function
f passed to Run.
A subtest will be like any other test.
In fact, top-level tests are semantically equivalent to subtests of a single
main test function.
For all subtests, the following holds:
1. a Fail of a subtest causes the Fail of all of its ancestor tests,
2. a FailNow of a test also causes its uncompleted descendants to be skipped,
but does not cause any of its ancestor tests to be skipped,
3. any subtest of a test must finish within the scope of this calling test,
4. any Parallel test function will run only after the enclosing test function
returns,
5. at most --test.parallel subtests will run concurrently at any time.
The combination of 3 and 4 means that all subtests marked as `Parallel` run
after the enclosing test function returns but before the Run method invoking
this test function returns.
This corresponds to the semantics of `Parallel` as it exists today.
These semantics enhance consistency: a call to `FailNow` will always terminate
the same set of subtests.
These semantics also guarantee that sequential tests are always run exclusively,
while only parallel tests can run together.
Also, parallel tests created by one sequentially running test will never run in
parallel with parallel tests created by another sequentially running test.
These simple rules allow for fairly extensive control over parallelism.
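Rule 1 above can be illustrated with a toy model; the `node` type and its `Fail` method are assumptions for illustration only, not the testing package's actual internals:

```go
package main

import "fmt"

// node models the parent chain of subtests: a call to Fail on a subtest
// marks that subtest and every one of its ancestors as failed (rule 1),
// without affecting siblings or descendants.
type node struct {
	name   string
	failed bool
	parent *node // nil for a top-level test
}

// Fail marks the test and all of its ancestors as failed.
func (n *node) Fail() {
	for t := n; t != nil; t = t.parent {
		t.failed = true
	}
}

func main() {
	root := &node{name: "TestFoo"}
	child := &node{name: "TestFoo/case1", parent: root}
	grand := &node{name: "TestFoo/case1/sub", parent: child}
	grand.Fail()
	fmt.Println(root.failed, child.failed, grand.failed) // true true true
}
```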
### Subbenchmark semantics
The `Benchmark` function defines the `BenchmarkResult` to be the result of
running all of its subbenchmarks in sequence.
This is equivalent to returning N == 1 and then the sum of all values for all
benchmarks, normalized to a single iteration.
It may be more appropriate to use a geometric mean, but as some of the values
may be zero the usage of such is somewhat problematic.
The proposed definition is meaningful and the user can still compute geometric
means by replacing calls to Run with calls to Benchmark if needed.
The main purpose of this definition is to define some semantics to using `Run`
in functions passed to `Benchmark`.
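The normalization described above can be sketched as follows; `leaf` and `combine` are hypothetical names standing in for testing's internal bookkeeping, not real API:

```go
package main

import "fmt"

// leaf holds the measured result of one leaf benchmark: iterations run
// and total nanoseconds spent. (A simplified stand-in for
// testing.BenchmarkResult.)
type leaf struct {
	N  int
	Ns int64
}

// combine computes the result the proposal assigns to a parent benchmark:
// N == 1, with a duration equal to the sum of each leaf's per-iteration
// cost, i.e. each leaf normalized to a single iteration.
func combine(leaves []leaf) leaf {
	var total int64
	for _, l := range leaves {
		total += l.Ns / int64(l.N) // per-iteration cost of this leaf
	}
	return leaf{N: 1, Ns: total}
}

func main() {
	// Two leaves at 2000 ns/op and 5000 ns/op combine to one iteration
	// costing 7000 ns.
	res := combine([]leaf{{N: 1000, Ns: 2000000}, {N: 10, Ns: 50000}})
	fmt.Printf("%d iteration, %d ns\n", res.N, res.Ns)
}
```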
### Logging
The rules for logging subtests are:
* Each subtest maintains its own buffer to which it logs.
* The Main test uses os.Stdout as its “buffer”.
* When a subtest finishes, it flushes its buffer to the parent test’s buffer,
prefixed with a header to identify the subtest.
* Each subtest logs to the buffer with an indentation corresponding to its level.
These rules are consistent with the sub-test semantics presented earlier.
Combined with these semantics, logs have the following properties:
* Logs from a parallel subtest always come after the logs of its parent.
* Logs from parallel subtests immediately follow the output of their parent.
* Messages logged by sequential tests will appear in chronological order in the
overall test logs.
* Each logged message is only displayed once.
* The output is identical to the old log format absent calls to t.Run.
* The output is identical except for non-Letter characters being allowed in
names if subtests are used.
Printing hierarchically makes the relation between tests visually clear.
It also avoids repeating printing some headers.
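The buffering rules above can be sketched with a toy `subLog` type — a hypothetical stand-in for the testing package's internals, shown only to make the flush-to-parent mechanism concrete:

```go
package main

import (
	"fmt"
	"strings"
)

// subLog models the proposed per-subtest buffering: each subtest writes to
// its own buffer, indented by its level, and flushes to its parent's buffer
// when it finishes, prefixed with a header identifying the subtest.
type subLog struct {
	name   string
	level  int // 0 for the main test, whose "buffer" is os.Stdout
	buf    strings.Builder
	parent *subLog
}

// log records a message, indented according to the subtest's level.
func (s *subLog) log(msg string) {
	fmt.Fprintf(&s.buf, "%s%s\n", strings.Repeat("    ", s.level), msg)
}

// finish flushes this subtest's buffer to its parent, under a header
// carrying the subtest's full name.
func (s *subLog) finish(failed bool) {
	status := "PASS"
	if failed {
		status = "FAIL"
	}
	header := fmt.Sprintf("%s--- %s: %s\n", strings.Repeat("    ", s.level-1), status, s.name)
	s.parent.buf.WriteString(header + s.buf.String())
}

func main() {
	root := &subLog{name: "TestFoo", level: 0}
	child := &subLog{name: "TestFoo/case1", level: 1, parent: root}
	child.log("got 5; want 6")
	child.finish(true)
	fmt.Print(root.buf.String())
}
```

Because a subtest flushes only when it finishes, sequential tests emit their logs in chronological order, and a parallel subtest's output lands after its parent's, as the properties above require.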
For benchmarks the priorities for logging are different.
It is important to visually correlate the logs with the benchmark lines.
It is also relatively rare to log a lot during benchmarking, so repeating some
headers is less of an issue.
The proposed logging scheme for benchmarks takes this into account.
As an alternative, we could use the same approach for benchmarks as for tests.
In that case, logs would only be printed after each top-level test.
For example:
```
BenchmarkForm/from_NFC/canonical/to_NFC-8 10000 23609 ns/op 112.33 MB/s
BenchmarkForm/from_NFC/canonical/to_NFD-8 10000 16597 ns/op 159.78 MB/s
BenchmarkForm/from_NFC/canonical/to_NFKC-8 10000 17188 ns/op 154.29 MB/s
BenchmarkForm/from_NFC/canonical/to_NFKD-8 10000 16082 ns/op 164.90 MB/s
BenchmarkForm/from_NFD/overflow/to_NFC-8 300 441589 ns/op 38.34 MB/s
BenchmarkForm/from_NFD/overflow/to_NFD-8 300 483748 ns/op 35.00 MB/s
BenchmarkForm/from_NFD/overflow/to_NFKC-8 300 467694 ns/op 36.20 MB/s
BenchmarkForm/from_NFD/overflow/to_NFKD-8 300 515475 ns/op 32.85 MB/s
--- FAIL: BenchmarkForm
--- FAIL: BenchmarkForm/from_NFC
normalize_test.go:768: Some failure.
--- BENCH: BenchmarkForm/from_NFC/canonical
normalize_test.go:776: Just a message.
normalize_test.go:776: Just a message.
--- BENCH: BenchmarkForm/from_NFC/canonical/to_NFD-8
normalize_test.go:789: Some message
normalize_test.go:789: Some message
normalize_test.go:789: Some message
normalize_test.go:776: Just a message.
normalize_test.go:776: Just a message.
normalize_test.go:768: Some failure.
…
--- FAIL: BenchmarkForm-8/from_NFD
normalize_test.go:768: Some failure.
…
normalize_test.go:768: Some failure.
--- BENCH: BenchmarkForm-8/from_NFD/overflow
normalize_test.go:776: Just a message.
normalize_test.go:776: Just a message.
--- BENCH: BenchmarkForm-8/from_NFD/overflow/to_NFD
normalize_test.go:789: Some message
normalize_test.go:789: Some message
normalize_test.go:789: Some message
normalize_test.go:776: Just a message.
normalize_test.go:776: Just a message.
BenchmarkMethod/Transform/small_change-8 100000 1165 ns/op 12.87 MB/s
BenchmarkMethod/Transform/small_no_change-8 1000000 103 ns/op 135.26 MB/s
…
```
It is still easy to see which logs influenced results (those marked `BENCH`), but
the user will have to align the logs with the result lines to correlate the data.
## Compatibility
The API changes are fully backwards compatible.
It introduces several minor changes in the logs:
* Names of tests and benchmarks may contain additional printable, non-space runes.
* Log items for tests may be indented.
benchstat and benchcmp may need to be adapted to take into account the
possibility of duplicate names.
## Implementation
Most of the work would be done by the author of this proposal.
The first step consists of some minor refactorings to make the diffs for
implementing T.Run and B.Run as small as possible.
Subsequently, T.Run and B.Run can be implemented individually.
Although the capability for parallel subtests will be implemented in the first
iteration, they will initially only be allowed for top-level tests.
Once we have a good way to detect improper usage of range variables, we could
open up parallelism by introducing the Go method or by enabling calls to `Parallel` on subtests.
The aim is to have the first implementations of T.Run and B.Run in for 1.7.
<!-- ### Implementation details
The proposed concurrency model only requires a small extension to the existing model. Handling communication between tests and subtests is new and is done by means of sync.WaitGroups.
The “synchronization life-cycle” of a parallel test is as follows:
While the parent is still blocking on the subtest, a subtest’s call to Parallel will:
1. Add the test to the parent's list of subtests that should run in parallel. (**new**)
2. Add 1 to the parent’s waitgroup. (**new**)
3. Signal the parent the subtest is detaching and will be run in parallel.
While the parent running unblocked, the subtest will:
4. Wait for a channel that will be used to receive signal to run in parallel. (**new**)
5. Wait for a signal on the channel received in Step 4.
6. Run test.
7. Post list of parallel subtests for this test for release to concurrency manager. (**new**) The concurrency manager will signal these tests they may run in parallel (see 4 and 5).
8. Signal completion to the manager goroutine spawned during the call to Parallel.
The manager goroutine, run concurrently with a test if Parallel is used:
9. Wait for completion of the test.
10. Signal the concurrency manager the test is done (decrease running count).
11. Wait for the test’s subtests to complete (through test’s WaitGroup). (**new**)
12. Flush report to parent. (Done by concurrency manager in current implementation.)
13. Signal parent completion (through parent’s WaitGroup).
Notes:
Unlike the old implementation, tests will only acquire a channel to receive a signal on to be run in parallel after the parent releases it to them (step 4.). This helps to retain the clarity in the code to see that subtests cannot be run earlier inadvertently.
Top-level subtest share the same code-base as subtests run with t.Run. Under the hood top-level test would be started with t.Run as well. -->
## Open issues
### Parallelism
Using Parallel in combination with closures is prone to the “forgetting to
capture a range variable” problem.
We could define a Go method analogous to Run, defined as follows:
```go
func (t *T) Go(name string, f func(t *T)) {
	t.Run(name, func(t *T) {
		t.Parallel()
		f(t)
	})
}
```
This suffers from the same problem, but at least would make it a) more explicit
that a range variable requires capturing and b) makes it easier to detect misuse
by go vet.
If it is possible for go vet to detect whether t.Parallel is in the call graph
of t.Run and whether the closure refers to a range variable this would be
sufficient and the Go method might not be necessary.
At first we could prohibit calls to Parallel from within subtests until we
decide on one of these methods or find a better solution.
### Teardown
We showed how it is possible to insert teardown code after running a few
parallel tests.
Though not difficult, it is a bit clumsy.
We could add the following method to make this easier:
```go
// Wait blocks until all parallel subtests have finished. It will Skip
// the current test if more than n subtests have failed. If n < 0 it will
// wait for all subtests to complete.
func (t *T) Wait(n int)
```
The documentation in Run would have to be slightly changed to say that Run will
call Wait(-1) before returning.
The parallel teardown example could then be written as:
```go
func TestTeardownParallel(t *testing.T) {
t.Go("Test1", parallelTest1)
t.Go("Test2", parallelTest2)
t.Go("Test3", parallelTest3)
t.Wait(-1)
// teardown code.
}
```
This could be added later if there seems to be a need for it.
The introduction of Wait would only require a minimal and backward compatible
change to the sub-test semantics.
|
# Proposal: Go should have generics
Author: [Ian Lance Taylor](iant@golang.org)
Created: January 2011
Last updated: April 2016
Discussion at https://golang.org/issue/15292
## Abstract
Go should support some form of generic programming.
Generic programming enables the representation of algorithms and data
structures in a generic form, with concrete elements of the code
(such as types) factored out.
It means the ability to express algorithms with minimal assumptions
about data structures, and vice-versa
(paraphrasing [Jazayeri, et al](https://www.dagstuhl.de/en/program/calendar/semhp/?semnr=98171)).
## Background
### Generic arguments in favor of generics
People can write code once, saving coding time.
People can fix a bug in one instance without having to remember to fix it
in others.
Generics avoid boilerplate: less coding by copying and editing.
Generics save time testing code: they increase the amount of code
that can be type checked at compile time rather than at run time.
Every statically typed language in current use has generics in one
form or another (even C has generics, where they are called preprocessor macros;
[example](https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/vec.h?revision=165314&view=markup&pathrev=165314)).
### Existing support for generic programming in Go
Go already supports a form of generic programming via interfaces.
People can write an abstract algorithm that works with any type that
implements the interface.
However, interfaces are limited because the methods must use specific types.
There is no way to write an interface with a method that takes an
argument of type T, for any T, and returns a value of the same type.
There is no way to write an interface with a method that compares two
values of the same type T, for any T.
The assumptions that interfaces require about the types that satisfy
them are not minimal.
Interfaces are not simply types; they are also values.
There is no way to use interface types without using interface values,
and interface values aren’t always efficient.
There is no way to create a slice of the dynamic type of an interface.
That is, there is no way to avoid boxing.
### Specific arguments in favor of generics in Go
Generics permit type-safe polymorphic containers.
Go currently has a very limited set of such containers: slices, and
maps of most but not all types.
Not every program can be written using a slice or map.
Look at the functions `SortInts`, `SortFloats`, `SortStrings` in the
sort package.
Or `SearchInts`, `SearchFloats`, `SearchStrings`.
Or the `Len`, `Less`, and `Swap` methods of `byName` in package io/ioutil.
Pure boilerplate copying.
The `copy` and `append` functions exist because they make slices much
more useful.
Generics would mean that these functions are unnecessary.
Generics would make it possible to write similar functions for maps
and channels, not to mention user created data types.
Granted, slices are the most important composite data type, and that’s why
these functions were needed, but other data types are still useful.
It would be nice to be able to make a copy of a map.
Right now that function can only be written for a specific map type,
but, except for types, the same code works for any map type.
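For illustration, here is such a copy function written for one concrete map type; except for the types in the signature and the `make` call, the body is identical for any other map:

```go
package main

import "fmt"

// copyStringIntMap copies a map[string]int. Without generics this helper
// must be rewritten for every concrete map type, even though the body
// never depends on the specific key or value type.
func copyStringIntMap(m map[string]int) map[string]int {
	c := make(map[string]int, len(m))
	for k, v := range m {
		c[k] = v
	}
	return c
}

func main() {
	orig := map[string]int{"a": 1, "b": 2}
	dup := copyStringIntMap(orig)
	dup["a"] = 100 // mutating the copy leaves the original untouched
	fmt.Println(orig["a"], dup["a"]) // 1 100
}
```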
Similarly, it would be nice to be able to multiplex one channel onto
two, without having to rewrite the function for each channel type.
One can imagine a range of simple channel manipulators, but they can
not be written because the type of the channel must be specified
explicitly.
Generics let people express the relationship between function parameters
and results.
Consider the simple Transform function that calls a function on every
element of a slice, returning a new slice.
We want to write something like
```
func Transform(s []T, f func(T) U) []U
```
but this can not be expressed in current Go.
In many Go programs, people only have to write explicit types in function
signatures.
Without generics, they also have to write them in another place: in the
type assertion needed to convert from an interface type back to the
real type.
The lack of static type checking provided by generics makes the code
heavier.
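For comparison, here is roughly what a general Transform must look like today using interface values; this sketch is not from the proposal, but it shows the boxing and run-time type assertions the desired `func Transform(s []T, f func(T) U) []U` would eliminate:

```go
package main

import "fmt"

// Transform applies f to every element of s, returning a new slice.
// Without generics it must traffic in interface{}: every element is boxed
// on the way in, and a mistaken type assertion is caught only at run time.
func Transform(s []interface{}, f func(interface{}) interface{}) []interface{} {
	r := make([]interface{}, len(s))
	for i, v := range s {
		r[i] = f(v)
	}
	return r
}

func main() {
	in := []interface{}{1, 2, 3}
	out := Transform(in, func(v interface{}) interface{} {
		return v.(int) * 2 // run-time type assertion, not compile-time checked
	})
	fmt.Println(out[0].(int), out[1].(int), out[2].(int)) // 2 4 6
}
```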
### What we want from generics in Go
Any implementation of generics in Go should support the following.
* Define generic types based on types that are not known until they are instantiated.
* Write algorithms to operate on values of these types.
* Name generic types and name specific instantiations of generic types.
* Use types derived from generic types, as in making a slice of a generic type,
or conversely, given a generic type known to be a slice, defining a variable
with the slice’s element type.
* Restrict the set of types that may be used to instantiate a generic type, to
ensure that the generic type is only instantiated with types that support the
required operations.
* Do not require an explicit relationship between the definition of a generic
type or function and its use. That is, programs should not have to
explicitly say *type T implements generic G*.
* Write interfaces that describe explicit relationships between generic types,
as in a method that takes two parameters that must both be the same unknown type.
* Do not require explicit instantiation of generic types or functions; they
should be instantiated as needed.
### The downsides of generics
Generics affect the whole language.
It is necessary to evaluate every single language construct to see how
it will work with generics.
Generics affect the whole standard library.
It is desirable to have the standard library make effective use of generics.
Every existing package should be reconsidered to see whether it would benefit
from using generics.
It becomes tempting to build generics into the standard library at a
very low level, as in C++ `std::basic_string<char, std::char_traits<char>, std::allocator<char> >`.
This has its benefits—otherwise nobody would do it—but it has
wide-ranging and sometimes surprising effects, as in incomprehensible
C++ error messages.
As [Russ pointed out](https://research.swtch.com/generic), generics are
a trade off between programmer time, compilation time, and execution
time.
Go is currently optimizing compilation time and execution time at the
expense of programmer time.
Compilation time is a significant benefit of Go.
Can we retain compilation time benefits without sacrificing too much
execution time?
Unless we choose to optimize execution time, operations that appear
cheap may be more expensive if they use values of generic type.
This may be subtly confusing for programmers.
I think this is less important for Go than for some other languages,
as some operations in Go already have hidden costs such as array
bounds checks.
Still, it would be essential to ensure that the extra cost of using
values of generic type is tightly bounded.
Go has a lightweight type system.
Adding generic types inevitably makes the type system more complex.
It is essential that the result remain lightweight.
The upsides of the downsides are that Go is a relatively small
language, and it really is possible to consider every aspect of the
language when adding generics.
At least the following sections of the spec would need to be extended:
Types, Type Identity, Assignability, Type assertions, Calls, Type
switches, For statements with range clauses.
Only a relatively small number of packages will need to be
reconsidered in light of generics: container/*, sort, flag, perhaps
bytes.
Packages that currently work in terms of interfaces will generally be
able to continue doing so.
### Conclusion
Generics will make the language safer, more efficient to use, and more
powerful.
These advantages are harder to quantify than the disadvantages, but
they are real.
## Examples of potential uses of generics in Go
* Containers
* User-written hash tables that are compile-time type-safe, rather than
converting slice keys to string and using maps
* Sorted maps (red-black tree or similar)
* Double-ended queues, circular buffers
* A simpler Heap
* `Keys(map[K]V) []K`, `Values(map[K]V) []V`
* Caches
* Compile-time type-safe `sync.Pool`
* Generic algorithms that work with these containers in a type-safe way.
* Union/Intersection
* Sort, StableSort, Find
* Copy (a generic container, and also copy a map)
* Transform a container by applying a function--LISP `mapcar` and friends
* math and math/cmplx
* testing/quick.{`Check`,`CheckEqual`}
* Mixins
* like `ioutil.NopCloser`, but preserving other methods instead of
restricting to the passed-in interface (see the `ReadFoo` variants of
`bytes.Buffer`)
* protobuf `proto.Clone`
* Eliminate boilerplate when calling sort function
* Generic diff: `func [T] Diff(x, y []T) []range`
* Channel operations
* Merge N channels onto one
* Multiplex one channel onto N
* The [worker-pool pattern](https://play.golang.org/p/b5XRHnxzZF)
* Graph algorithms, for example immediate dominator computation
* Multi-dimensional arrays (not slices) of different lengths
* Many of the packages in go.text could benefit from it to avoid duplicate
implementation or APIs for `string` and `[]byte` variants; many points that
could benefit need high performance, though, and generics should provide that
benefit
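For instance, the `Keys`/`Values` helpers listed above might look like the
following under a hypothetical type-parameter syntax (an illustrative sketch
only, written in the square-bracket form Go eventually adopted, not a concrete
proposal made by this document):

```go
package main

import "fmt"

// Keys returns the keys of m in unspecified order.
func Keys[K comparable, V any](m map[K]V) []K {
	ks := make([]K, 0, len(m))
	for k := range m {
		ks = append(ks, k)
	}
	return ks
}

// Values returns the values of m in unspecified order.
func Values[K comparable, V any](m map[K]V) []V {
	vs := make([]V, 0, len(m))
	for _, v := range m {
		vs = append(vs, v)
	}
	return vs
}

func main() {
	m := map[string]int{"a": 1}
	fmt.Println(Keys(m), Values(m)) // [a] [1]
}
```

Because `K` and `V` are inferred at each call site, the same helpers work for
any map type without converting keys to `string` or going through `interface{}`.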
## Proposal
I won’t discuss a specific implementation proposal here: my hope is
that this document helps show people that generics are worth having
provided the downsides can be kept under control.
The following documents are my previous generics proposals,
presented for historic reference. All are flawed in various ways.
* [Type functions](15292/2010-06-type-functions.md) (June 2010)
* [Generalized types](15292/2011-03-gen.md) (March 2011)
* [Generalized types](15292/2013-10-gen.md) (October 2013)
* [Type parameters](15292/2013-12-type-params.md) (December 2013)
# Proposal: A Code of Conduct for the Go community
Author: Andrew Gerrand <adg@golang.org>
Last updated: 17 November 2015
## Abstract
This proposal specifies a Code of Conduct for the Go community.
The code is to be enforced in all project-operated spaces (specified
in the Code of Conduct text, below). Other Go-related spaces (forums,
events, etc) are encouraged to adopt the code as well.
## Background
Since Go’s release over 6 years ago, a sizable community has grown around the
language. The golang-nuts mailing list has more than 17k members and receives
thousands of posts each month, and there are many major Go conferences each
year with thousands of attendees.
Today the various Go spaces are moderated by people that are unknown to the
public and with no specified policy. It is not clear to the members of these
spaces how they are expected to conduct themselves. In the rare cases where
people are banned, they are afforded no recourse.
For a community of this scale to grow and prosper, it needs guidelines
to encourage productive and positive participation, and a process for
resolving conflict when it inevitably arises.
The community must also grow to survive. An explicit goal of this proposal
is to promote cultural diversity within our community and thereby make it more
welcoming and inclusive.
## Proposal
A Code of Conduct document is added to the “go” repository as
`doc/conduct.html`, visible on the web at https://golang.org/conduct.
The document is linked prominently from official Go spaces (such as the
golang-nuts mailing list).
The document text is as follows.
---
### About the Code of Conduct
#### Why have a Code of Conduct?
Online communities include people from many different backgrounds.
The Go contributors are committed to providing a friendly, safe and welcoming
environment for all, regardless of age, disability, gender, nationality, race,
religion, sexuality, or similar personal characteristic.
The first goal of the Code of Conduct is to specify a baseline standard
of behavior so that people with different social values and communication
styles can talk about Go effectively, productively, and respectfully.
The second goal is to provide a mechanism for resolving conflicts in the
community when they arise.
The third goal of the Code of Conduct is to make our community welcoming to
people from different backgrounds.
Diversity is critical to the project; for Go to be successful, it needs
contributors and users from all backgrounds.
(See [Go, Open Source, Community](https://blog.golang.org/open-source).)
With that said, a healthy community must allow for disagreement and debate.
The Code of Conduct is not a mechanism for people to silence others with whom
they disagree.
#### Where does the Code of Conduct apply?
If you participate in or contribute to the Go ecosystem in any way,
you are encouraged to follow the Code of Conduct while doing so.
Explicit enforcement of the Code of Conduct applies to the
official forums operated by the Go project (“Go spaces”):
- The official [GitHub projects](https://github.com/golang/)
and [code reviews](https://go-review.googlesource.com/).
- The [golang-nuts](https://groups.google.com/group/golang-nuts) and
[golang-dev](https://groups.google.com/group/golang-dev) mailing lists.
- The #go-nuts IRC channel on Freenode.
- The [/r/golang subreddit](https://reddit.com/r/golang).
Other Go groups (such as conferences, meetups, and other unofficial forums) are
encouraged to adopt this Code of Conduct. Those groups must provide their own
moderators and/or working group (see below).
### Gopher values
These are the values to which people in the Go community (“Gophers”) should aspire.
* Be friendly and welcoming
* Be patient
* Remember that people have varying communication styles and that not
everyone is using their native language.
(Meaning and tone can be lost in translation.)
* Be thoughtful
* Productive communication requires effort.
Think about how your words will be interpreted.
* Remember that sometimes it is best to refrain entirely from commenting.
* Be respectful
* In particular, respect differences of opinion.
* Be charitable
* Interpret the arguments of others in good faith, do not seek to disagree.
* When we do disagree, try to understand why.
* Avoid destructive behavior:
* Derailing: stay on topic; if you want to talk about something else,
start a new conversation.
* Unconstructive criticism: don't merely decry the current state of affairs;
offer—or at least solicit—suggestions as to how things may be improved.
* Snarking (pithy, unproductive, sniping comments)
* Discussing potentially offensive or sensitive issues;
this all too often leads to unnecessary conflict.
* Microaggressions: brief and commonplace verbal, behavioral and
environmental indignities that communicate hostile, derogatory or negative
slights and insults to a person or group.
People are complicated.
You should expect to be misunderstood and to misunderstand others;
when this inevitably occurs, resist the urge to be defensive or assign blame.
Try not to take offense where no offense was intended.
Give people the benefit of the doubt.
Even if the intent was to provoke, do not rise to it.
It is the responsibility of *all parties* to de-escalate conflict when it arises.
### Unwelcome behavior
These actions are explicitly forbidden in Go spaces:
* Insulting, demeaning, hateful, or threatening remarks.
* Discrimination based on age, disability, gender, nationality, race,
religion, sexuality, or similar personal characteristic.
* Bullying or systematic harassment.
* Unwelcome sexual advances.
* Incitement to any of these.
### Moderation
The Go spaces are not free speech venues; they are for discussion about Go.
These spaces have moderators.
The goal of the moderators is to facilitate civil discussion about Go.
When using the official Go spaces you should act in the spirit of the “Gopher
values”.
If you conduct yourself in a way that is explicitly forbidden by the CoC,
you will be warned and asked to stop.
If you do not stop, you will be removed from our community spaces temporarily.
Repeated, wilful breaches of the CoC will result in a permanent ban.
Moderators are held to a higher standard than other community members.
If a moderator creates an inappropriate situation, they should expect less
leeway than others, and should expect to be removed from their position if they
cannot adhere to the CoC.
Complaints about moderator actions must be handled using the reporting process
below.
### Reporting issues
The Code of Conduct Working Group is a group of people that represent the Go
community. They are responsible for handling conduct-related issues.
Their purpose is to de-escalate conflicts and try to resolve issues to the
satisfaction of all parties. They are:
* Aditya Mukerjee <dev@chimeracoder.net>
* Andrew Gerrand <adg@golang.org>
* Dave Cheney <dave@cheney.net>
* Jason Buberel <jbuberel@google.com>
* Peggy Li <peggyli.224@gmail.com>
* Sarah Adams <sadams.codes@gmail.com>
* Steve Francia <steve.francia@gmail.com>
* Verónica López <gveronicalg@gmail.com>
If you encounter a conduct-related issue, you should report it to the
Working Group using the process described below.
**Do not** post about the issue publicly or try to rally sentiment against a
particular individual or group.
* Mail [conduct@golang.org](mailto:conduct@golang.org) or
[submit an anonymous report](https://golang.org/s/conduct-report).
* Your message will reach the Working Group.
* Reports are confidential within the Working Group.
* Should you choose to remain anonymous then the Working Group cannot
notify you of the outcome of your report.
* You may contact a member of the group directly if you do not feel
comfortable contacting the group as a whole. That member will then raise
the issue with the Working Group as a whole, preserving the privacy of the
reporter (if desired).
* If your report concerns a member of the Working Group they will be recused
from Working Group discussions of the report.
* The Working Group will strive to handle reports with discretion and
sensitivity, to protect the privacy of the involved parties,
and to avoid conflicts of interest.
* You should receive a response within 48 hours (likely sooner).
(Should you choose to contact a single Working Group member,
it may take longer to receive a response.)
* The Working Group will meet to review the incident and determine what happened.
* With the permission of the person reporting the incident, the Working Group
may reach out to other community members for more context.
* The Working Group will reach a decision as to how to act. These may include:
* Nothing.
* A request for a private or public apology.
* A private or public warning.
* An imposed vacation (for instance, asking someone to abstain for a week
from a mailing list or IRC).
* A permanent or temporary ban from some or all Go spaces.
* The Working Group will reach out to the original reporter to let them know
the decision.
* Appeals to the decision may be made to the Working Group,
or to any of its members directly.
**Note that the goal of the Code of Conduct and the Working Group is to resolve
conflicts in the most harmonious way possible.**
We hope that in most cases issues may be resolved through polite discussion and
mutual agreement.
Bannings and other forceful measures are to be employed only as a last resort.
Changes to the Code of Conduct (including to the members of the Working Group)
should be proposed using the
[change proposal process](https://golang.org/s/proposal-process).
### Summary
* Treat everyone with respect and kindness.
* Be thoughtful in how you communicate.
* Don’t be destructive or inflammatory.
* If you encounter an issue, please mail conduct@golang.org.
#### Acknowledgements
Parts of this document were derived from the Code of Conduct documents of the
Django, FreeBSD, and Rust projects.
---
## Rationale
*Do we need a Code of Conduct?*
Some community members have argued that people should be trusted to do the
right thing, or simply ignored when they do not.
To address the former: there are varying definitions of the “right thing”; a
Code of Conduct specifies what that means.
To address the latter: if we allow destructive forms of communication (those
that go against the "Gopher Values") to flourish, we will be left only with
people who enjoy that kind of communication.
The CoC also makes moderation processes more transparent: the various Go spaces
have always been moderated spaces, but the rules were never written down.
It seems better to be explicit about the behavior we expect in those spaces.
*Why write our own?*
There are many existing Codes of Conduct to choose from,
so we could have saved some time by simply re-using an existing one.
This document does draw heavily on existing documents such as the Rust,
FreeBSD, and Django Codes of Conduct, but it includes some original material
too.
I opted for this approach to specifically address the needs of the Go community
as I understand them.
*Is behavior outside Go spaces covered by this CoC?*
An earlier draft of this proposal included a clause that behavior outside Go
spaces may affect one’s ability to participate within them. After much
community feedback, I removed the clause. It was seen as unnecessarily
overreaching and as providing an opportunity for malicious people to oust
community members for their behavior unrelated to Go.
*The “Gopher Values” are not my values. I consider myself part of the Go
community; shouldn’t the CoC represent me, too?*
Members of the Go community are from many different cultures; it would be
impossible to represent the full range of social norms in a single document.
Instead, the values described by the CoC are designed to reflect the lowest
common denominator of behavior necessary for civil discourse.
Community members (including the author of this document) whose norms may be
regarded by others as impolite or aggressive are expected to be self-aware and
thoughtful in how they communicate to avoid creating conflict.
*The Code of Conduct document seems unnecessarily long.
Can’t it just be a version of the Golden Rule?*
The Go community comprises thousands of people from all over the world; it
seems unrealistic to assume that those individuals should have compatible ideas
about how to get along with each other.
By describing the kind of behavior to which one should aspire, and those
behaviors that are explicitly forbidden, we at least give all community members
an idea of what is expected of them.
Note that *this* document is a proposal document; the Code of Conduct itself is
a subset of the proposal document and is about 1,200 words long, including the
description of the reporting process. The “Gopher Values” section—the meat of
the thing—is under 300 words.
### Examples of CoC issues and their resolutions
These fictional examples show how community members and the working group might
use the Code of Conduct’s guidelines to resolve a variety of issues. In each
case, the goal of the intervention is to raise the level of discourse and to
make people feel welcome in our community spaces.
*Rude and unwelcoming behavior:*
* B and C are posting back and forth on a golang-nuts thread.
* D enters the conversation and proposes an alternative solution.
* B and C ignore D and continue their discussion.
* D re-articulates their point in a different way.
* B responds to D by asking them to "butt out".
* C emails B privately, noting that B's reaction was uncalled-for, and suggests that B apologize to D.
* B replies that they have nothing to apologize for.
* C reports the incident to the CoC Working Group.
* E, a member of the working group, contacts B and C separately to get details on the incident.
* E asks B to apologize for being unwelcoming and rude, and notifies C that this action was taken.
* B acknowledges their mistake and apologizes to D.
* The issue is resolved.
In this case, B could have avoided conflict by just saying nothing, or by
responding politely to D’s messages. It’s not OK to be rude or exclusive in the
public forum. (If B and C want to discuss things privately, they should take it
off-list.)
*A classic troll:*
* B repeatedly posts about a particular issue, raising the issue on unrelated
threads, as well as starting many redundant threads.
* The forum moderators warn B to stop, as derailing threads and spamming are
against the CoC.
* B refuses and becomes belligerent, making insulting remarks about the
moderators and other members of the community.
* The moderators ban B from the forum for 10 days, and post a message explaining this.
* The issue is resolved.
In this case, the goal is for the user to come back after a while and hopefully
be more productive in the future. We don’t want to ban them forever; it is
important to keep people with unpopular opinions around to prevent the
community from becoming an echo chamber.
*Condescending behavior:*
* B posts a message to /r/golang describing the approach they're planning to
take for wrapping http.HandlerFunc and asking for feedback.
* C replies to the message "Why don’t you just do it the obvious way?" with an
explanation of an alternate approach.
* D, a bystander to this exchange, replies “That may have been obvious to you,
but you shouldn’t assume that it’s obvious to just anyone.”
* C replies “Well I guess you’re an idiot too, then.”
* E, a moderator, observes this interaction and sends a message to C to tell
them their behavior is unacceptable.
* C insists there is nothing wrong with their behavior.
* E replies publicly to C on the original thread to say “Condescension and
personal insults are not appropriate in this forum. Please observe the Code
of Conduct when posting here.”
* The issue is resolved.
*A microaggression:*
* B has a typically female name and posts to a mailing list to announce a
Go-to-Forth compiler they wrote.
* C replies “I am amazed to see a woman doing such great work. Nice job!”
* B writes to the CoC Working Group to say “I felt really deflated for my work
to be seen as impressive just because I’m a woman. Can you say something to C
for me?”
* D, a member of the working group, reaches out to C to explain how comments
like that can make women feel out of place in technical communities.
* Recognizing the hurt C has caused, C sends an email to B to apologize.
* The issue is resolved.
In this case, we see a working group member acting as a mediator for
someone who didn’t feel comfortable confronting someone directly.
The working group member D contacted C in private to discuss the issue,
to avoid bringing shame to C in public, since C apparently meant no harm.
*Impatient behavior:*
* B asks a seemingly simple question on the mailing list.
* C replies back with an answer and an additional comment to
"RTFM before you ask simple questions!!!"
* B points out the small detail C overlooked in the question to clarify the
complexity.
* C replies back "Ack! Sorry, I actually don't know."
* E enters the conversation and criticizes C for their behavior in their
original reply, raising the issue further with a moderator.
* D, a moderator, replies to the list to ask C to be less impatient in their responses,
suggesting they could have instead said something like "The docs cover this."
* The issue is resolved.
In this case, the moderators acted in a purely advisory role.
*An abrasive newcomer, a pithy response:*
* B makes their first post to golang-nuts:
"What's the deal with Go's error handling mechanism? It's brain dead and stupid."
* C, a forum regular, replies “Oh no, not another one.”
* D, another regular replies (compensating for C) “Hi, welcome to the list. As
I’m sure you can appreciate, this topic has come up many times before. Here
are some links to previous discussions and articles that discuss the topic.
Let us know if you have any questions that aren’t covered by those threads.
Thanks.”
* E, a moderator, reaches out to C off list to ask them to refrain from this
kind of pithy, unhelpful commentary in the future.
* There is no need for further action.
In this case, we see a community member (D) try to steer the discussion in a
productive direction, while a moderator tries to prevent future negativity
without creating a public drama.
## Compatibility
Some people feel stifled by the general concept of a Code of Conduct.
Others may find explicit efforts to improve diversity in our community
unpalatable for various reasons.
While a major goal of this proposal is to make the community more inclusive,
this does by definition exclude people that cannot abide by the goals and
principles of the code.
I see this as a regrettable but necessary and inescapable design tradeoff.
The implementation of the code may cause us to lose a few people, but we stand
to gain much more.
## Implementation
I (Andrew Gerrand) will submit the Code of Conduct text to the main Go
repository, so that it is available at the URL https://golang.org/conduct.
I will also set up the conduct@golang.org email address and the anonymous web
form.
Then I will link the document prominently from these places:
* `README.md` and `CONTRIBUTING.md` files in the official Go repositories.
* The golang-nuts and golang-dev mailing list welcome messages.
* The #go-nuts IRC channel topic.
* The /r/golang subreddit sidebar.
I will work with the existing moderators of these spaces to implement the Code
of Conduct in those spaces, recruiting additional moderators where necessary.
Operators of unofficial Go events and forums are encouraged to adopt this Code
of Conduct, so that our community members can enjoy a consistent experience
across venues.
## Open issues
* The Working Group does not yet include anyone from Asia, Europe, or Africa.
In particular, Europe and China are home to a large swath of Go users, so it
would be valuable to include some people from those areas in the working
group. (We have some volunteers from Europe already.)
* The proposed process does not "scale down" to projects of a single maintainer.
Future revisions should permit a lightweight version of the process, but that
is considered outside the scope of this document.
# Proposal: `-json` flag in `go test`
Author(s): Nodir Turakulov <nodir@google.com>
_With initial input by Russ Cox, Caleb Spare, Andrew Gerrand and Minux Ma._
Last updated: 2016-09-14
Discussion at https://golang.org/issue/2981.
* [Abstract](#abstract)
* [Background](#background)
* [Proposal](#proposal)
* [`testing` package](#testing-package)
* [Example output](#example-output)
* [Rationale](#rationale)
* [Compatibility](#compatibility)
* [Implementation](#implementation)
## Abstract
Add `-json` flag to `go test`.
When specified, `go test` stdout is JSON.
## Background
There is a clear need in parsing test and benchmark results by third party
tools, see feedback in https://golang.org/issue/2981.
Currently `go test` output format is suited for humans, but not computers.
Also a change to the current format may break existing programs that parse
`go test` output.
Currently, under certain conditions, `go test` streams test/benchmark results
so a user can see them as they happen.
Also streaming prevents losing data if `go test` crashes.
This proposal attempts to preserve streaming capability in the `-json` mode, so
third party tools interpreting `go test` output can stream results too.
`-json` flag was originally proposed by Russ Cox in
https://golang.org/issue/2981 in 2012. This proposal has several differences.
## Proposal
I propose the following user-visible changes:
* add `-json` flag to `go test`
* `-json`: `go test` stdout is a valid [JSON Text Sequence][rfc7464]
of JSON objects containing test binary artifacts.
Format below.
* `-json -v`: verbose messages are printed to stderr, so stdout contains
only JSON.
* `-json -n`: not supported
* `-json -x`: not supported
* In `testing` package
* Add `type State int` with constants to describe test/benchmark states.
* Add type `JSONResult` for JSON output.
* Change `Cover.CoveredPackages` field type from `string` to `[]string`.
Type definitions and details below.
### `testing` package
```go
// State is one of test/benchmark execution states.
// Implements fmt.Stringer, json.Marshaler and json.Unmarshaler.
type State int
const (
// RUN means a test/benchmark execution has started
RUN State = iota + 1
PASS
FAIL
SKIP
)
// JSONResult structs encoded in JSON are emitted by `go test` if -json flag is
// specified.
type JSONResult struct {
// Configuration is metadata produced by test/benchmark infrastructure.
// The interpretation of a key/value pair is up to tooling, but the key/value
// pair describes all test/benchmark results that follow,
// until overwritten by a JSONResult with a non-empty Configuration field.
//
// The key begins with a lowercase character (as defined by unicode.IsLower),
// contains no space characters (as defined by unicode.IsSpace)
// nor upper case characters (as defined by unicode.IsUpper).
// Conventionally, multiword keys are written with the words separated by hyphens,
// as in cpu-speed.
Configuration map[string]string `json:",omitempty"`
// Package is a full name of the package containing the test/benchmark.
// It is zero iff Name is zero.
Package string `json:",omitempty"`
// Name is the name of the test/benchmark that this JSONResult is about.
// It can be empty if JSONResult describes global state, such as
// Configuration or Stdout/Stderr.
Name string `json:",omitempty"`
// State is the current state of the test/benchmark.
// It is non-zero iff Name is non-zero.
State State `json:",omitempty"`
// Procs is the value of runtime.GOMAXPROCS for this test/benchmark run.
// It is specified only in the first JSONResult of a test/benchmark.
Procs int `json:",omitempty"`
// Log is log created by calling Log or Logf functions of *T or *B.
// A JSONResult with Log is emitted by go test as soon as possible.
// First occurrence of test/benchmark does not contain logs.
Log string `json:",omitempty"`
// Benchmark contains benchmark-specific details.
// It is emitted in the final JSONResult of a benchmark with a terminal
// State if the benchmark does not have sub-benchmarks.
Benchmark *BenchmarkResult `json:",omitempty"`
// CoverageMode is coverage mode that was used to run these tests.
CoverageMode string `json:",omitempty"`
// TotalStatements is the number of statements checked for coverage.
TotalStatements int64 `json:",omitempty"`
// ActiveStatements is the number of statements covered by tests, examples
// or benchmarks.
ActiveStatements int64 `json:",omitempty"`
// CoveredPackages is the full names of packages included in coverage.
CoveredPackages []string `json:",omitempty"`
// Stdout is text written by the test binary directly to os.Stdout.
// If this field is non-zero, all others are zero.
Stdout string `json:",omitempty"`
// Stderr is text written by test binary directly to os.Stderr.
// If this field is non-zero, all others are zero.
Stderr string `json:",omitempty"`
}
```
### Example output
Here is an example of `go test -json` output.
It is simplified and commented for the convenience of the reader;
in practice it will be unindented and will contain JSON Text Sequence separators
and no comments.
```json
// go test emits environment configuration
{
"Configuration": {
"commit": "7cd9055",
"commit-time": "2016-02-11T13:25:45-0500",
"goos": "darwin",
"goarch": "amd64",
"cpu": "Intel(R) Core(TM) i7-4980HQ CPU @ 2.80GHz",
"cpu-count": "8",
"cpu-physical-count": "4",
"os": "Mac OS X 10.11.3",
"mem": "16 GB"
}
}
// TestFoo started
{
"Package": "github.com/user/repo",
"Name": "TestFoo",
"State": "RUN",
"Procs": 4
}
// A line was written directly to os.Stdout
{
"Package": "github.com/user/repo",
"Stdout": "Random string written directly to os.Stdout\n"
}
// TestFoo passed
{
"Package": "github.com/user/repo",
"Name": "TestFoo",
"State": "PASS"
}
// TestBar started
{
"Package": "github.com/user/repo",
"Name": "TestBar",
"State": "RUN",
"Procs": 4
}
// TestBar logged a line
{
"Package": "github.com/user/repo",
"Name": "TestBar",
"State": "RUN",
"Log": "some test output"
}
// TestBar failed
{
"Package": "github.com/user/repo",
"Name": "TestBar",
"State": "FAIL"
}
// TestQux started
{
"Package": "github.com/user/repo",
"Name": "TestQux",
"State": "RUN",
"Procs": 4
}
// TestQux calls T.Fatal("bug")
{
"Package": "github.com/user/repo",
"Name": "TestQux",
"State": "RUN",
"Log": "bug"
}
{
"Package": "github.com/user/repo",
"Name": "TestQux",
"State": "FAIL"
}
// TestComposite started
{
"Package": "github.com/user/repo",
"Name": "TestComposite",
"State": "RUN",
"Procs": 4
}
// TestComposite/A=1 subtest started
{
"Package": "github.com/user/repo",
"Name": "TestComposite/A=1",
"State": "RUN",
"Procs": 4
}
// TestComposite/A=1 passed
{
"Package": "github.com/user/repo",
"Name": "TestComposite/A=1",
"State": "PASS"
}
// TestComposite passed
{
"Package": "github.com/user/repo",
"Name": "TestComposite",
"State": "PASS"
}
// Example1 started
{
"Package": "github.com/user/repo",
"Name": "Example1",
"State": "RUN",
"Procs": 4
}
// Example1 passed
{
"Package": "github.com/user/repo",
"Name": "Example1",
"State": "PASS"
}
// BenchmarkBar started
{
"Package": "github.com/user/repo",
"Name": "BenchmarkBar",
"State": "RUN",
"Procs": 4
}
// BenchmarkBar passed
{
"Package": "github.com/user/repo",
"Name": "BenchmarkBar",
"State": "PASS",
"Benchmark": {
"T": 1000000,
"N": 1000,
"Bytes": 100,
"MemAllocs": 10,
"MemBytes": 10
}
}
// BenchmarkComposite started
{
"Package": "github.com/user/repo",
"Name": "BenchmarkComposite",
"State": "RUN",
"Procs": 4
}
// BenchmarkComposite/A=1 started
{
"Package": "github.com/user/repo",
"Name": "BenchmarkComposite/A=1",
"State": "RUN",
"Procs": 4
}
// BenchmarkComposite/A=1 passed
{
"Package": "github.com/user/repo",
"Name": "BenchmarkComposite/A=1",
"State": "PASS",
"Benchmark": {
"T": 1000000,
"N": 1000,
"Bytes": 100,
"MemAllocs": 10,
"MemBytes": 10
}
}
// BenchmarkComposite passed
{
"Package": "github.com/user/repo",
"Name": "BenchmarkComposite",
"State": "PASS"
}
// Total coverage information in the end.
{
"CoverageMode": "set",
"TotalStatements": 1000,
"ActiveStatements": 900,
"CoveredPackages": [
"github.com/user/repo"
]
}
```
## Rationale
Alternatives:
* Add `-format` and `-benchformat` flags proposed in
https://github.com/golang/go/issues/12826.
While this is simpler to implement, users will have to do more work to
specify format and then parse it.
Trade-offs:
* I propose to make `-json` mutually exclusive with `-n` and `-x` flags.
These flags belong to `go build` subcommand while this proposal is scoped
to `go test`.
Supporting the flags would require adding JSON output knowledge to
`go/build.go`.
* `JSONResult.Benchmark.T` provides duration of a benchmark run, but there is
not an equivalent for a test run.
This is a trade off for `JSONResult` simplicity.
We don't have to define `TestResult` because `JSONResult` is enough to
describe a test result.
Currently `go test` does not provide test timing info, so the proposal is
consistent with the current `go test` output.
## Compatibility
The only backwards incompatibility is changing `testing.Cover.CoveredPackages`
field type, but `testing.Cover` is not covered by Go 1 compatibility
guidelines.
## Implementation
Most of the work would be done by the author of this proposal.
The goal is to get agreement on this proposal and to complete the work
before the 1.8 freeze date.
[testStreamOutput]: https://github.com/golang/go/blob/0b248cea169a261cd0c2db8c014269cca5a170c4/src/cmd/go/test.go#L361-L369
[rfc7464]: https://tools.ietf.org/html/rfc7464
# Execution tracer overhaul
Authored by mknyszek@google.com with a mountain of input from others.
In no particular order, thank you to Felix Geisendorfer, Nick Ripley, Michael
Pratt, Austin Clements, Rhys Hiltner, thepudds, Dominik Honnef, and Bryan
Boreham for your invaluable feedback.
## Background
[Original design document from
2014.](https://docs.google.com/document/d/1FP5apqzBgr7ahCCgFO-yoVhk4YZrNIDNf9RybngBc14/pub)
Go execution traces provide a moment-to-moment view of what happens in a Go
program over some duration.
This information is invaluable in understanding program behavior over time and
can be leveraged to achieve significant performance improvements.
Because Go has a runtime, it can provide deep information about program
execution without any external dependencies, making traces particularly
attractive for large deployments.
Unfortunately, limitations in the trace implementation prevent widespread use.
For example, the process of analyzing execution traces scales poorly with the
size of the trace.
Traces need to be parsed in their entirety to do anything useful with them,
making them impossible to stream.
As a result, trace parsing and validation has very high memory requirements for
large traces.
Also, Go execution traces are designed to be internally consistent, but don't
provide any way to align with other kinds of traces, for example OpenTelemetry
traces and Linux sched traces.
Alignment with higher level tracing mechanisms is critical to connecting
business-level tasks with resource costs.
Meanwhile alignment with lower level traces enables a fully vertical view of
application performance to root out the most difficult and subtle issues.
Thanks to work in Go 1.21 cycle, the execution tracer's run-time overhead was
reduced from about -10% throughput and +10% request latency in web services to
about 1% in both for most applications.
This reduced overhead in conjunction with making traces more scalable enables
some [exciting and powerful new opportunities](#use-cases) for the diagnostic.
Lastly, the implementation of the execution tracer has evolved organically over
time and it shows.
The codebase also has many old warts and some age-old bugs that make collecting
traces difficult, and seem broken.
Furthermore, many significant design decisions were made over the years but
weren't thoroughly documented; those decisions largely exist solely in old
commit messages and breadcrumbs left in comments within the codebase itself.
## Goals
The goal of this document is to define an alternative implementation for Go
execution traces that scales up to large Go deployments.
Specifically, the design presented aims to achieve:
- Make trace parsing require a small fraction of the memory it requires today.
- Streamable traces, to enable analysis without storage.
- Fix age-old bugs and present a path to clean up the implementation.
Furthermore, this document will present the existing state of the tracer in
detail and explain why it's like that to justify the changes being made.
## Design
### Overview
The design is broken down into four parts:
- Timestamps and sequencing.
- Orienting against threads instead of Ps.
- Partitioning.
- Wire format cleanup.
These four parts are roughly ordered by how fundamental they are to the trace
design, and so the former sections are more like concrete proposals, while the
latter sections are more like design suggestions that would benefit from
prototyping.
The earlier parts can also be implemented without considering the latter parts.
Each section includes the history and design of the existing system, both to
document the current system in one place and to more easily compare it to the
new proposed system.
That requires, however, a lot of prose, which can obscure the bigger picture.
Here are the highlights of each section without that additional context.
**Timestamps and sequencing**.
- Compute timestamps from the OS's monotonic clock (`nanotime`).
- Use per-goroutine sequence numbers for establishing a partial order of events
(as before).
**Orienting against threads (Ms) instead of Ps**.
- Batch trace events by M instead of by P.
- Use lightweight M synchronization for trace start and stop.
- Simplify syscall handling.
- All syscalls have a full duration in the trace.
**Partitioning**.
- Traces are sequences of fully self-contained partitions that may be streamed.
- Each partition has its own stack table and string table.
- Partitions are purely logical: consecutive batches with the same ID.
- In general, parsers need state from the previous partition to get accurate
timing information.
- Partitions have an "earliest possible" timestamp to allow for imprecise
analysis without a previous partition.
- Partitions are bound by both a maximum wall time and a maximum size
(determined empirically).
- Traces contain an optional footer delineating partition boundaries as byte
offsets.
- Emit batch lengths to allow for rapidly identifying all batches within a
partition.
**Wire format cleanup**.
- More consistent naming scheme for event types.
- Separate out "reasons" that a goroutine can block or stop from the event
types.
- Put trace stacks, strings, and CPU samples in dedicated batches.
### Timestamps and sequencing
For years, the Go execution tracer has used the `cputicks` runtime function for
obtaining a timestamp.
On most platforms, this function queries the CPU for a tick count with a single
instruction.
(Intuitively a "tick" goes by roughly every CPU clock period, but in practice
this clock usually has a constant rate that's independent of CPU frequency
entirely.) Originally, the execution tracer used this timestamp exclusively for
ordering trace events.
Unfortunately, many [modern
CPUs](https://docs.google.com/spreadsheets/d/1jpw5aO3Lj0q23Nm_p9Sc8HrHO1-lm9qernsGpR0bYRg/edit#gid=0)
don't provide such a clock that is stable across CPU cores, meaning even though
cores might synchronize with one another, the clock read-out on each CPU is not
guaranteed to be ordered in the same direction as that synchronization.
This led to traces with inconsistent timestamps.
To combat this, the execution tracer was modified to use a global sequence
counter that was incremented synchronously for each event.
Each event would then have a sequence number that could be used to order it
relative to other events on other CPUs, and the timestamps could just be used
solely as a measure of duration on the same CPU.
However, this approach led to tremendous overheads, especially on multiprocessor
systems.
That's why in Go 1.7 the [implementation
changed](https://go-review.googlesource.com/c/go/+/21512) so that each goroutine
had its own sequence counter.
The implementation also cleverly avoids including the sequence number in the
vast majority of events by observing that running goroutines can't actually be
taken off their thread until they're synchronously preempted or yield
themselves.
Any event emitted while the goroutine is running is trivially ordered after the
start.
The only non-trivial ordering cases left are where an event is emitted by or on
behalf of a goroutine that is not actively running (Note: I like to summarize
this case as "emitting events at a distance" because the scheduling resource
itself is not emitting the event bound to it.) These cases need to be able to be
ordered with respect to each other and with a goroutine starting to run (i.e.
the `GoStart` event).
In the current trace format, there are only two such cases: the `GoUnblock`
event, which indicates that a blocked goroutine may start running again and is
useful for identifying scheduling latencies, and `GoSysExit`, which is used to
determine the duration of a syscall but may be emitted from a different P than
the one the syscall was originally made on.
(For more details on the `GoSysExit` case, see the [next section](#batching).)
Furthermore, there are versions of the `GoStart, GoUnblock`, and `GoSysExit`
events that omit a sequence number to save space if the goroutine just ran on
the same P as the last event, since that's also a case where the events are
trivially serialized.
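The ordering discipline this implies for a consumer can be sketched as follows; the `event` and `frontier` types here are illustrative stand-ins, not taken from the runtime or any real trace parser:

```go
package main

import "fmt"

// event is a minimal stand-in for a parsed trace event that is ordered
// "at a distance" (e.g. GoUnblock or GoSysExit in the current format).
type event struct {
	goid uint64
	seq  uint64 // per-goroutine sequence number
}

// frontier tracks the next expected sequence number for each goroutine.
type frontier map[uint64]uint64

// ready reports whether ev can be admitted into the ordered stream:
// only when it carries the next sequence number for its goroutine.
func (f frontier) ready(ev event) bool { return ev.seq == f[ev.goid] }

// admit advances the goroutine's frontier after emitting ev.
func (f frontier) admit(ev event) { f[ev.goid] = ev.seq + 1 }

func main() {
	f := frontier{}
	a := event{goid: 1, seq: 0}
	b := event{goid: 1, seq: 1}
	fmt.Println(f.ready(b)) // must wait: seq 0 hasn't been admitted yet
	f.admit(a)
	fmt.Println(f.ready(b)) // now admissible
}
```

A consumer holds back any not-yet-ready event until the event before it (in per-goroutine order) appears, which is what makes a global total order unnecessary.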
In the end, this approach successfully reduced the trace overhead from over 3x
to 10-20%.
However, it turns out that the trace parser still requires timestamps to be in
order, leading to the infamous ["time stamps out of
order"](https://github.com/golang/go/issues/16755) error when `cputicks`
inevitably doesn't actually emit timestamps in order.
Ps are a purely virtual resource; they don't actually map down directly to
physical CPU cores at all, so it's not even reasonable to assume that the same P
runs on the same CPU for any length of time.
Although we can work around this issue, I propose we try to rely on the
operating system's clock instead and fix up timestamps as a fallback.
The main motivation behind this change is alignment with other tracing systems.
It's already difficult enough to try to internally align the `cputicks` clock,
but making it work with clocks used by other traces such as those produced by
distributed tracing systems is even more difficult.
On Linux, the `clock_gettime` syscall, called `nanotime` in the runtime, takes
around 15ns on average when called in a loop.
Compare this to roughly 10ns for `cputicks`.
Trivially replacing all `cputicks` calls in the current tracer with `nanotime`
reveals a small performance difference that depends largely on the granularity
of each result.
Today, the `cputicks` value is divided by 64.
On a 3 GHz processor, this amounts to a granularity of about 20 ns.
Replacing that with `nanotime` and no time division (i.e. nanosecond
granularity) results in a 0.22% geomean regression in throughput in the Sweet
benchmarks.
The trace size also increases by 17%.
Dividing `nanotime` by 64 we see approximately no regression and a trace size
decrease of 1.7%.
Overall, there seems to be little performance downside to using `nanotime`,
provided we pick an appropriate timing granularity: what we lose by calling
`nanotime`, we can easily regain by sacrificing a small amount of precision.
And it's likely that most of the precision below 128 nanoseconds or so is noise,
given the average cost of a single call into the Go scheduler (~250
nanoseconds).
To give us plenty of precision, I propose a target timestamp granularity of 64
nanoseconds.
This should be plenty to give us fairly fine-grained insights into Go program
behavior while also keeping timestamps small.
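As a sketch of what this granularity choice looks like in code (the `traceClock` helper and constant name are hypothetical, not from the runtime):

```go
package main

import "fmt"

// traceTimeDiv is the proposed timestamp granularity: 64 nanoseconds
// per trace tick. Dividing by a power of two compiles to a shift, so
// the cost over using the raw nanotime value is negligible.
const traceTimeDiv = 64

// traceClock converts a raw nanotime() reading into a trace timestamp.
func traceClock(nanos int64) int64 {
	return nanos / traceTimeDiv
}

func main() {
	// Two events ~250ns apart (roughly one call into the scheduler)
	// remain distinguishable at 64ns granularity.
	fmt.Println(traceClock(1_000_000), traceClock(1_000_250))
}
```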
As for sequencing, I believe we must retain the per-goroutine sequence number
design as-is.
Relying solely on a timestamp, even a good one, has significant drawbacks.
For one, issues arise when timestamps are identical: the parser needs to decide
on a valid ordering and has no choice but to consider every possible ordering of
those events without additional information.
While such a case is unlikely with nanosecond-level timestamp granularity, it
totally precludes making timestamp granularity more coarse, as suggested in the
previous paragraph.
A sequencing system that's independent of the system's clock also retains the
ability of the tracing system to function despite a broken clock (modulo
returning an error when timestamps are out of order, which again I think we
should just work around).
Even `clock_gettime` might be broken on some machines!
How would a tracing system continue to function despite a broken clock? For that
I propose making the trace parser fix up timestamps that don't line up with the
partial order.
The basic idea is that if the parser discovers a partial order edge between two
events A and B, and A's timestamp is later than B's, then the parser applies A's
timestamp to B.
B's new timestamp is in turn propagated later on in the algorithm in case
another partial order edge is discovered between B and some other event C, and
those events' timestamps are also out-of-order.
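A minimal sketch of this fixup rule (the function name and in-memory representation are hypothetical):

```go
package main

import "fmt"

// fixupEdge enforces a happens-before edge from A to B on timestamps:
// if A's timestamp is later than B's, B inherits A's timestamp. The
// corrected value then flows into any later edge out of B, which is
// the propagation described above.
func fixupEdge(tsA, tsB int64) (int64, int64) {
	if tsA > tsB {
		tsB = tsA
	}
	return tsA, tsB
}

func main() {
	// A GoUnblock at t=105 ordered before a GoStart stamped t=100 by a
	// skewed clock: the GoStart is pulled up to t=105.
	_, b := fixupEdge(105, 100)
	fmt.Println(b)
	// The fix propagates: if B happens-before C and C was stamped
	// t=103, C is pulled up as well.
	_, c := fixupEdge(b, 103)
	fmt.Println(c)
}
```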
There's one last issue with timestamps here on platforms for which the runtime
doesn't have an efficient nanosecond-precision clock at all, like Windows.
(Ideally, we'd make use of system calls to obtain a higher-precision clock, like
[QPC](https://learn.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps),
but [calls to this API can take upwards of a hundred nanoseconds on modern
hardware, and even then the resolution is on the order of a few hundred
nanoseconds](https://github.com/golang/go/issues/8687#issuecomment-694498710).)
On Windows we can just continue to use `cputicks` to get the precision and rely
on the timestamp fixup logic in the parser, at the cost of being unable to
reliably align traces with other diagnostic data that uses the system clock.
### Orienting by threads (Ms) instead of Ps
Today the Go runtime batches trace events by P.
That is, trace events are grouped in batches that all happened, in order, on the
same P.
A batch is represented in the runtime as a buffer, 32 KiB in size, which is
attached to a P when it's actively being written to.
Events are written to this P's buffer in their encoded form.
This design choice allows most event writes to elide synchronization with the
rest of the runtime and linearizes the trace with respect to a P, which is
crucial to [sequencing without requiring a global total
order](#timestamps-and-sequencing).
Batching traces by any core scheduling resource (G, M, or P) could in principle
have similar properties.
At a glance, there are a few reasons Ps make a better choice.
One reason is that there are generally a small number of Ps compared to Ms and
Gs, minimizing the maximum number of buffers that can be active at any given
time.
Another reason is convenience.
When batching, tracing generally requires some kind of synchronization with all
instances of its companion resource type to get a consistent trace.
(Think of a buffer which has events written to it very infrequently.
It needs to be flushed at some point before the trace can be considered
complete, because it may contain critical information needed to sequence events
in other buffers.) Furthermore, synchronization when starting a trace is also
generally useful, as that provides an opportunity to inspect the state of the
world and write down details about it, simplifying validation.
Stopping the world is a convenient way to get the attention of all Ps in the Go
runtime.
However, there are problems with batching by P that make traces more complex
than necessary.
The core of these problems lies with the `GoSysExit` event.
This event requires special arrangements in both the runtime and when validating
traces to ensure a consistent trace.
The difficulty with this event is that it's emitted by a goroutine that was
blocked in a syscall and lost its P, and because it doesn't have a P, it might
race with the runtime enabling and disabling tracing.
Therefore it needs to wait until it has a P to avoid that race.
(Note: the tracing system does have a place to put events when there is no P
available, but that doesn't help in this case.
The tracing system uses the fact that it stops-the-world to synchronize with
`GoSysExit` by preventing it from writing an event until the trace system can
finish initialization.)
The problem with `GoSysExit` stems from a fundamental mismatch: Ms emit events,
but only Ps are preemptible by the Go scheduler.
This really extends to any situation where we'd like an M to emit an event when
it doesn't have a P, say for example when it goes to sleep after dropping its P
and not finding any work in the scheduler, and it's one reason why we don't have
any M-specific events at all today.
So, suppose we batch trace events by M instead.
In the case of `GoSysExit`, it would always be valid to write to a trace buffer,
because any synchronization would have to happen on the M instead of the P, so
no races with stopping the world.
However, this also means the tracer _can't_ simply stop the world, because
stopping the world is built around stopping user Go code, which runs with a P.
So, the tracer would have to use something else (more on this later).
Although `GoSysExit` is simpler, `GoSysBlock` becomes slightly more complex in
the case where the P is retaken by `sysmon`.
In the per-P world it could be written into the buffer of the taken P, so it
didn't need any synchronization.
In the per-M world, it becomes an event that happens "at a distance" and so
needs a sequence number from the syscalling goroutine for the trace consumer to
establish a partial order.
However, we can do better by reformulating the syscall events altogether.
Firstly, I propose always emitting the full time range for each syscall.
This is a quality-of-life choice that may increase trace overheads slightly, but
also provides substantially more insight into how much time is spent in
syscalls.
Syscalls already take ~250 nanoseconds at a baseline on Linux and are unlikely
to get faster in the future for security reasons (due to Spectre and Meltdown)
and the additional event would never contain a stack trace, so writing it out
should be quite fast.
(Nonetheless, we may want to reconsider emitting these events for cgo calls.)
The new events would be called, for example, `GoSyscallBegin` and
`GoSyscallEnd`.
If a syscall blocks, `GoSyscallEnd` is replaced with `GoSyscallEndBlocked`
(more on this later).
Secondly, I propose adding an explicit event for stealing a P.
In the per-M world, keeping just `GoSysBlock` to represent both a goroutine's
state transition and the state of a P is not feasible, because we're revealing
the fact that fundamentally two threads are racing with one another on the
syscall exit.
An explicit P-stealing event, for example `ProcSteal`, would be required to
be ordered against a `GoSyscallEndBlocked`, with the former always happening
before.
The precise semantics of the `ProcSteal` event would be a `ProcStop` but one
performed by another thread.
Because this means events can now happen to P "at a distance," the `ProcStart`,
`ProcStop`, and `ProcSteal` events all need sequence numbers.
Note that the naive emission of `ProcSteal` and `GoSyscallEndBlocked` will
cause them to race, but the interaction of the two represents an explicit
synchronization within the runtime, so the parser can always safely wait for the
`ProcSteal` to emerge in the frontier before proceeding.
The timestamp order may also not be right, but since [we already committed to
fixing broken timestamps](#timestamps-and-sequencing) in general, this skew will
be fixed up by that mechanism for presentation.
(In practice, I expect the skew to be quite small, since it only happens if the
retaking of a P races with a syscall exit.)
Per-M batching might also incur a higher memory cost for tracing, since there
are generally more Ms than Ps.
I suspect this isn't actually too big of an issue since the number of Ms is
usually close to the number of Ps.
In the worst case, there may be as many Ms as Gs! However, if we also [partition
the trace](#partitioning), then the number of active buffers will only be
proportional to the number of Ms that actually ran in a given time window, which
is unlikely to be an issue.
Still, if this does become a problem, a reasonable mitigation would be to simply
shrink the size of each trace buffer compared to today.
The overhead of the tracing slow path is vanishingly small, so doubling its
frequency would likely not incur a meaningful compute cost.
Other than those three details, per-M batching should function identically to
the current per-P batching: trace events may already be safely emitted without a
P (modulo `GoSysExit` synchronization), so we're not losing anything else with
the change.
Instead, however, what we gain is a deeper insight into thread execution.
Thread information is currently present in execution traces, but difficult to
interpret because it's always tied to P start and stop events.
A switch to per-M batching forces traces to treat Ps and Ms orthogonally.
Given all this, I propose switching to per-M batching.
The only remaining question to resolve is trace synchronization for Ms.
(As an additional minor consideration, I would also like to propose adding the
batch length to the beginning of each batch.
Currently (and in the foreseeable future), the trace consumer needs to iterate
over the entire trace once to collect all the batches for ordering.
We can speed up this process tremendously by allowing the consumer to _just_
collect the information it needs.)
#### M synchronization
The runtime already contains a mechanism to execute code on every M via signals,
`doAllThreadsSyscall`.
However, traces have different requirements than `doAllThreadsSyscall`, and I
think we can exploit the specifics of these requirements to achieve a more
lightweight alternative.
First, observe that getting the attention of every M for tracing is not strictly
necessary: Ms that never use the tracer need not be mentioned in the trace at
all.
This observation allows us to delegate trace state initialization to the M
itself, so we can synchronize with Ms at trace start simply by atomically
setting a single global flag.
If an M ever writes into the trace buffer, then it will initialize its state by
emitting an event indicating what it was doing when tracing started.
For example, if the first event the M is going to emit is `GoBlock`, then it
will emit an additional event before that that indicates the goroutine was
running since the start of the trace (a hypothetical `GoRunning` event).
Disabling tracing is slightly more complex, as the tracer needs to flush every
M's buffer or identify that its buffer was never written to.
However, this is fairly straightforward to do with per-M seqlocks.
Specifically, Ms would double-check the `trace.enabled` flag under the seqlock
and anything trying to stop a trace would first disable the flag, then iterate
over every M to make sure it _observed_ its seqlock was unlocked.
This guarantees that every M observed or will observe the new flag state.
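The seqlock discipline described above can be sketched as follows; the type and function names are hypothetical, and the real runtime would keep this state per-M without heap allocation:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// mTraceState is a per-M sketch of the seqlock scheme: the counter is
// odd while the M is inside the event-writing critical section.
type mTraceState struct {
	seq atomic.Uint64
}

var traceEnabled atomic.Bool

// writeEvent is the M's event-writing path: enter the seqlock,
// double-check the flag, write, exit.
func (m *mTraceState) writeEvent(emit func()) {
	m.seq.Add(1) // now odd: in the critical section
	if traceEnabled.Load() {
		emit()
	}
	m.seq.Add(1) // even again: outside
}

// stopTheTracer disables tracing and then waits until every M has been
// observed outside its seqlock, guaranteeing each M either saw the new
// flag state or finished its write under the old one.
func stopTheTracer(ms []*mTraceState) {
	traceEnabled.Store(false)
	for _, m := range ms {
		for m.seq.Load()%2 == 1 {
			// spin: M is mid-write and will observe the flag next time
		}
	}
}

func main() {
	m := &mTraceState{}
	traceEnabled.Store(true)
	n := 0
	m.writeEvent(func() { n++ })
	stopTheTracer([]*mTraceState{m})
	m.writeEvent(func() { n++ }) // flag now false: no event written
	fmt.Println(n)
}
```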
There's just one problem with all this: it may be that an M might be running and
never emit an event.
This case is critical to capturing system dynamics.
As far as I can tell, there are three cases here: the M is running user code
without emitting any trace events (e.g. a tight loop), the M is in a system call
or C code the whole time, or the M is `sysmon`, a special thread that always
runs without a P.
The first case is fairly easy to deal with: because it's running with a P, we
can just preempt it and establish an invariant that any preemption always emits
an event if the M's trace state has not been initialized.
The second case is a little more subtle, but luckily not very complex to
implement.
The tracer can identify whether the M has a G blocked in a syscall (or
equivalently has called into C code) just before disabling tracing globally by
checking `readgstatus(m.curg) == _Gsyscall` on each M's G and writing it down.
If the tracer can see that the M never wrote to its buffer _after_ it disables
tracing, it can safely conclude that the M was still in a syscall at the moment
when tracing was disabled, since otherwise it would have written to the buffer.
Note that since we're starting to read `gp.atomicstatus` via `readgstatus`, we
now need to ensure consistency between the G's internal status and the events
we emit representing those status changes, from the tracer's perspective.
Thus, we need to make sure we hold the M's trace seqlock across any
`gp.atomicstatus` transitions, which will require a small refactoring of the
runtime and the tracer.
For the third case, we can play basically the same trick as the second case, but
instead check to see if sysmon is blocked on a note.
If it's not, it's running.
If it doesn't emit any events (e.g. to stop on a note) then the tracer can
assume that it's been running, if its buffer is empty.
A big change with this synchronization mechanism is that the tracer no longer
obtains the state of _all_ goroutines when a trace starts, only those that run
or execute syscalls.
The remedy to this is to simply add it back in.
Since the only cases not covered are goroutines in `_Gwaiting` or `_Grunnable`,
the tracer can set the `_Gscan` bit on the status to ensure the goroutine can't
transition out.
At that point, it collects a stack trace and writes out a status event for the
goroutine inside a buffer that isn't attached to any particular M.
To avoid ABA problems with a goroutine transitioning out of `_Gwaiting` or
`_Grunnable` and then back in, we also need an atomically-accessed flag for each
goroutine that indicates whether an event has been written for that goroutine
yet.
The result of all this synchronization is that the tracer only lightly perturbs
application execution to start and stop traces.
In theory, a stop-the-world is no longer required at all, but in practice
(especially with [partitioning](#partitioning)), a brief stop-the-world to begin
tracing dramatically simplifies some of the additional synchronization necessary.
Even so, starting and stopping a trace will now be substantially more lightweight
than before.
### Partitioning
The structure of the execution trace's binary format limits what you can do with
traces.
In particular, to view just a small section of the trace, the entire trace needs
to be parsed, since a batch containing an early event may appear at the end of
the trace.
This can happen if, for example, a P that stays relatively idle throughout the
trace wakes up once at the beginning and once at the end, but never writes
enough to the trace buffer to flush it.
To remedy this, I propose restructuring traces into a stream of self-contained
partitions, called "generations."
More specifically, a generation is a collection of trace batches that represents
a complete trace.
In practice this means each generation boundary is a global buffer flush.
Each trace batch will have a generation number associated with it, and generations
will not interleave.
The way generation boundaries are identified in the trace is when a new batch
with a different number is identified.
The size of each generation will be roughly constant: new generation boundaries
will be created when either the partition reaches some threshold size _or_ some
maximum amount of wall-clock time has elapsed.
The exact size threshold and wall-clock limit will be determined empirically,
though my general expectation for the size threshold is around 16 MiB.
Generation boundaries will add a substantial amount of implementation
complexity, but the cost is worth it, since it'll fundamentally enable new
use-cases.
The trace parser will need to be able to "stitch" together events across
generation boundaries, and for that I propose the addition of a new `GoStatus`
event.
This event is a generalization of the `GoInSyscall` and `GoWaiting` events.
This event will be emitted the first time a goroutine is about to be mentioned
in a trace, and will carry a state enum argument that indicates what state the
goroutine was in *before* the next event to be emitted.
To more tightly couple events to states, the state emitted for a goroutine will
be derived from the event about to be emitted (except for the "waiting" and
"syscall" cases, where it's explicit).
The trace parser will be able to use these events to validate continuity across
generations.
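A sketch of the continuity check a parser could perform at a generation boundary (the function name and state strings are illustrative):

```go
package main

import "fmt"

// validateStatus checks the continuity property described above: the
// state carried by a goroutine's GoStatus event at the start of a
// generation must match whatever state the previous generation left
// it in.
func validateStatus(prevGen map[uint64]string, goid uint64, status string) error {
	if prev, ok := prevGen[goid]; ok && prev != status {
		return fmt.Errorf("goroutine %d: generation starts in %q but previous generation ended in %q",
			goid, status, prev)
	}
	return nil // unseen goroutines are fine: GoStatus introduces them
}

func main() {
	prev := map[uint64]string{42: "runnable"}
	fmt.Println(validateStatus(prev, 42, "runnable"))
	fmt.Println(validateStatus(prev, 42, "syscall"))
}
```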
This global buffer flush can be implemented as an extension to the
aforementioned [M synchronization](#m-synchronization) design by replacing the
`enabled` flag with a generation counter and doubling up much of the trace
state.
The generation counter will be used to index into the trace state, allowing for
us to dump and then toss old trace state, to prepare it for the next generation
while the current generation is actively being generated.
In other words, it will work akin to ending a trace, but for just one half of
the trace state.
One final subtlety created by partitioning is that now we may no longer have a
stop-the-world in between complete traces (since a generation is a complete
trace).
This means some events may be active when a trace is starting, and we'll lose
that information.
For user-controlled event types (user tasks and user regions), I propose doing
nothing special.
It's already true that if a task or region was in progress we lose that
information, and it's difficult to track that efficiently given how the API is
structured.
For range event types we control (sweep, mark assist, STW, etc.), I propose
adding the concept of "active" events which are used to indicate that a
goroutine or P has been actively in one of these ranges since the start of a
generation.
These events will be emitted alongside the `GoStatus` event for a goroutine.
(This wasn't mentioned above, but we'll also need a `ProcStatus` event, so we'll
also emit these events as part of that process as well.)
### Event cleanup
Since we're modifying the trace anyway, this is a good opportunity to clean up
the event types, making them simpler to implement, simpler to parse and
understand, and more uniform overall.
The three biggest changes I would like to propose are
1. A uniform naming scheme,
1. Separation of the reasons that goroutines might stop or block from the event
   type, and
1. Placing strings, stacks, and CPU samples in their own dedicated batch types.
Firstly, for the naming scheme, I propose the following rules (which are not
strictly enforced, only guidelines for documentation):
- Scheduling resources, such as threads and goroutines, have events related to
them prefixed with the related resource (i.e. "Thread," "Go," or "Proc").
- Scheduling resources have "Create" and "Destroy" events.
- Scheduling resources have generic "Status" events used for indicating their
state at the start of a partition (replaces `GoInSyscall` and `GoWaiting`).
- Scheduling resources have "Start" and "Stop" events to indicate when that
resource is in use.
The connection between resources is understood through context today.
- Goroutines also have "Block" events which are like "Stop", but require an
"Unblock" before the goroutine can "Start" again.
- Note: Ps are exempt since they aren't a true resource, more like a
best-effort semaphore in practice.
There's only "Start" and "Stop" for them.
- Events representing ranges of time come in pairs with the start event having
the "Begin" suffix and the end event having the "End" suffix.
- Events have a prefix corresponding to the deepest resource they're associated
with.
Secondly, I propose moving the reasons why a goroutine resource might stop or
block into an argument.
This choice is useful for backwards compatibility, because the most likely
change to the trace format at any given time will be the addition of more
detail, for example in the form of a new reason to block.
Thirdly, I propose placing strings, stacks, and CPU samples in their own
dedicated batch types.
This, in combination with batch sizes in each batch header, will allow the trace
parser to quickly skim over a generation and find all the strings, stacks, and
CPU samples.
This makes certain tasks faster, but more importantly, it simplifies parsing and
validation, since the full tables are just available up-front.
It also means that the event batches are able to have a much more uniform format
(one byte event type, followed by several varints) and we can keep all format
deviations separate.
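The uniform event shape can be sketched as follows; the event type value is made up and the real encoding details would differ:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// appendEvent encodes an event in the uniform shape described above:
// a one-byte event type followed by a fixed (per-type) number of
// varint arguments, with no argument-count bits in the type byte.
func appendEvent(buf []byte, ev byte, args ...uint64) []byte {
	buf = append(buf, ev)
	var tmp [binary.MaxVarintLen64]byte
	for _, a := range args {
		buf = append(buf, tmp[:binary.PutUvarint(tmp[:], a)]...)
	}
	return buf
}

func main() {
	const evGoStart = 0x10 // hypothetical type value
	// e.g. a GoStart with a timestamp delta, goroutine ID, and
	// sequence number as its three varint arguments.
	b := appendEvent(nil, evGoStart, 123456, 42, 7)
	fmt.Println(len(b), b[0] == evGoStart)
}
```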
Beyond these big three, there are a myriad of additional tweaks I would like to
propose:
- Remove the `FutileWakeup` event, which is no longer used (even in the current
implementation).
- Remove the `TimerGoroutine` event, which is no longer used (even in the
current implementation).
- Rename `GoMaxProcs` to `ProcsChange`.
- This needs to be written out at trace startup, as before.
- Redefine `GoSleep` as a `GoBlock` and `GoUnblock` pair.
- Break out `GoStartLabel` into a more generic `GoLabel` event that applies a
label to a goroutine the first time it emits an event in a partition.
- Change the suffixes of pairs of events representing some activity to be
`Begin` and `End`.
- Ensure all GC-related events contain the string `GC`.
- Add [sequence numbers](#timestamps-and-sequencing) to events where necessary:
- `GoStart` still needs a sequence number.
- `GoUnblock` still needs a sequence number.
- Eliminate the `GoStartLocal`, `GoUnblockLocal`, and `GoSysExitLocal` events
  for uniformity.
- Because traces are no longer bound by stopping the world, and can indeed now
start during the GC mark phase, we need a way to identify that a GC mark is
currently in progress.
Add the `GCMarkActive` event.
- Eliminate inline strings and make all strings referenced from a table for
uniformity.
- Because of partitioning, the size of the string table is unlikely to become
large in practice.
- This means all events can have a fixed size (expressed in terms of
the number of arguments) and we can drop the argument count bits from the
event type.
A number of these tweaks above will likely make traces bigger, mostly due to
additional information and the elimination of some optimizations for uniformity.
One idea to regain some of this space is to compress goroutine IDs in each
generation by maintaining a lightweight mapping.
Each time a goroutine would emit an event that needs the current goroutine
ID, it'll check a g-local cache.
This cache will contain the goroutine's alias ID for the current partition
and the partition number.
If the goroutine does not have an alias for the current partition, it
increments a global counter to acquire one and writes down the mapping of
that counter to its full goroutine ID in a global table.
This table is then written out at the end of each partition.
But this is optional.
## Implementation
As discussed in the overview, this design is intended to be implemented in
parts.
Some parts are easy to integrate into the existing tracer, such as the switch to
nanotime and the traceback-related improvements.
Others, such as per-M batching, partitioning, and the change to the event
encoding, are harder.
While in theory all could be implemented separately as an evolution of the
existing tracer, it's simpler and safer to hide the changes behind a feature
flag, like a `GOEXPERIMENT` flag, at least to start with.
Especially for per-M batching and partitioning, it's going to be quite
complicated to try to branch in all the right spots in the existing tracer.
Instead, we'll basically have two trace implementations temporarily living
side-by-side, with a mostly shared interface.
Where they differ (and there are really only a few spots), the runtime can
explicitly check for the `GOEXPERIMENT` flag.
This also gives us an opportunity to polish the new trace implementation.
The implementation will also require updating the trace parser to make tests
work.
I suspect this will result in what is basically a separate trace parser that's
invoked when the new header version is identified.
Once we have a basic trace parser that exports the same API, we can work on
improving the internal trace parser API to take advantage of the new features
described in this document.
## Use-cases
This document presents a large change to the existing tracer under the banner of
"traces at scale" which is a bit abstract.
To better understand what that means, let's explore a few concrete use-cases
that this design enables.
### Flight recording
Because the trace in this design is partitioned, it's now possible for the
runtime to carry around the most recent trace partition.
The runtime could then expose this partition to the application as a snapshot of
what the application recently did.
This is useful for debugging at scale, because it enables the application to
collect data exactly when something goes wrong.
For instance:
- A server's health check fails.
The server takes a snapshot of recent execution and puts it into storage
before exiting.
- Handling a request exceeds a certain latency threshold.
The server takes a snapshot of the most recent execution state and uploads it
to storage for future inspection.
- A program crashes when executing in a deployed environment and leaves behind a
core dump.
That core dump contains the recent execution state.
Here's a quick design sketch:
```go
package trace
// FlightRecorder represents a flight recording configuration.
type FlightRecorder struct { ... }
// NewFlightRecorder creates a new flight recording configuration.
func NewFlightRecorder() *FlightRecorder
// Start begins process-wide flight recording. Only one FlightRecorder
// may be started at once. If another FlightRecorder is started, this
// function will return an error.
//
// This function can be used concurrently with Start and Stop.
func (fr *FlightRecorder) Start() error
// TakeSnapshot collects information about the execution of this program
// from the last few seconds and writes it to w. This FlightRecorder
// must have been started to take a snapshot.
func (fr *FlightRecorder) TakeSnapshot(w io.Writer) error
// Stop ends process-wide flight recording. Stop must be called on the
// same FlightRecorder that started recording.
func (fr *FlightRecorder) Stop() error
```
- The runtime accumulates trace buffers as usual, but when it makes a partition,
it puts the trace data aside as it starts a new one.
- Once the new partition is finished, the previous is discarded (buffers are
reused).
- At any point, the application can request a snapshot of the current trace
state.
- The runtime immediately creates a partition from whatever data it has, puts
that together with the previously-accumulated partition, and hands it off to
the application for reading.
- The runtime then continues accumulating trace data in a new partition while
the application reads the trace data.
- The application is almost always guaranteed two partitions, with one being as
large as a partition would be in a regular trace (the second one is likely to
be smaller).
- Support for flight recording in the `/debug/pprof/trace` endpoint.
### Fleetwide trace collection
Today some power users collect traces at scale by setting a very modest sampling
rate, e.g. 1 second out of every 1000 seconds that a service is up.
This has resulted in collecting a tremendous amount of useful data about
execution at scale, but this use-case is currently severely limited by trace
performance properties.
With this design, it should be reasonable to increase the sampling rate by an
order of magnitude and collect much larger traces for offline analysis.
### Online analysis
Because the new design allows for partitions to be streamed and the trace
encoding is much faster to process, it's conceivable that a service could
process its own trace continuously and aggregate and filter only what it needs.
This online processing avoids the need to store traces for future inspection.
The kinds of processing I imagine for this includes:
- Detailed task latency breakdowns over long time horizons.
- CPU utilization estimates for different task types.
- Semantic processing and aggregation of user log events.
- Fully customizable flight recording (at a price).
### Linux sched trace association
Tooling can be constructed that interleaves Go execution traces with traces
produced via:
`perf sched record --clockid CLOCK_MONOTONIC <command>`
## Prior art
### JFR
[JFR, or "Java Flight
Recorder"](https://developers.redhat.com/blog/2020/08/25/get-started-with-jdk-flight-recorder-in-openjdk-8u#using_jdk_flight_recorder_with_jdk_mission_control),
is an execution tracing system for Java applications.
JFR is highly customizable with a sophisticated wire format.
It supports very detailed configuration of events via an XML configuration file,
including the ability to enable/disable stack traces per event and set a latency
threshold an event needs to exceed in order to be sampled (e.g. "only sample
file reads if they exceed 20ms").
It also supports custom user-defined binary-encoded events.
By default, JFR offers a low-overhead configuration (<1%) for "continuous
tracing" and a slightly higher-overhead configuration (1-2%) for more detailed
"profile tracing."
These configurations are just default configuration files that ship with the
JVM.
Continuous tracing is special in that it accumulates data into an internal
global ring buffer which may be dumped at any time.
Notably this data cannot be recovered from a crashed VM, though it can be
recovered in a wide variety of other cases.
Continuous tracing is disabled by default.
The JFR encoding scheme is quite complex, but achieves low overhead partially
through varints.
JFR traces are also partitioned, enabling scalable analysis.
In many ways the existing Go execution tracer looks a lot like JFR, just without
partitioning, and with a simpler encoding.
### KUTrace
[KUTrace](https://github.com/dicksites/KUtrace) is a system-wide execution
tracing system for Linux.
It achieves low overhead, high resolution, and staggering simplicity by cleverly
choosing to trace on each kernel transition (the "goldilocks" point in the
design space, as the author puts it).
It uses a fairly simple 8-byte-word-based encoding scheme to keep trace writing
fast, and exploits the very common case of a system call returning quickly
(>90%) to pack two events into each word when possible.
This project's low-overhead tracing inspired this effort, but in the end we
didn't take too many of its insights.
### CTF
[CTF, or "Common Trace Format"](https://diamon.org/ctf/#spec7), is more of a
meta tracing system.
It defines a binary format, a metadata format (to describe that binary format),
as well as a description language.
In essence, it contains all the building blocks of a trace format without
defining one for a specific use-case.
Traces in CTF are defined as a series of streams of events.
Events in a stream are grouped into self-contained packets.
CTF contains many of the same concepts that we define here (packets are batches;
the streams described in this document are per-thread, etc.), though it largely
leaves interpretation of the trace data up to the one defining a CTF-based trace
format.
## Future work
### Event encoding
The trace format relies heavily on LEB128-encoded integers.
While this choice makes the trace quite compact (achieving 4-6 bytes/event, with
some tricks to keep each encoded integer relatively small), it comes at a cost
to decoding speed since LEB128 is well-known for being relatively slow to decode
compared to similar integer encodings.
(Note: this is only a hunch at present; this needs to be measured.) (The cost at
encoding time is dwarfed by other costs, like tracebacks, so it's not on the
critical path for this redesign.)
This section proposes a possible future trace encoding that is simpler, but
without any change would produce a much larger trace.
It then proposes two possible methods of compressing the trace from there.
#### Encoding format
To start with, let's redefine the format as a series of 4-byte words of data.
Each event has a single header word that includes the event type (8 bits), space
for a 4-bit event reason, and the timestamp delta (20 bits).
Note that every event requires a timestamp.
At a granularity of 64 nanoseconds, this gives us a timestamp range of ~67
milliseconds, which is enough for most cases.
In the case where we can't fit the delta in 20 bits, we'll emit a new
"timestamp" event which is 2 words wide, containing the timestamp delta from the
start of the partition.
This gives us a 7-byte delta which should be plenty for any trace.
Each event's header is then followed by a number of 4-byte arguments.
The minimum number of arguments is fixed for each event type, and will be made
evident in the trace's self-description.
A self-description table at the start of the trace could indicate (1) whether
there are a variable number of additional arguments, (2) which argument
indicates how many there are, and (3) whether any arguments are byte arrays
(integer followed by data).
Byte array data lengths are always rounded up to 4 bytes.
Possible arguments include, for example, per-goroutine sequence numbers,
goroutine IDs, thread IDs, P IDs, stack IDs, and string IDs.
Within a partition, most of these trivially fit in 32 bits:
- Per-goroutine sequence numbers easily fit provided each partition causes a
global sequence number reset, which is straightforward to arrange.
- Thread IDs come from the OS as `uint32` on all platforms we support.
- P IDs trivially fit within 32 bits.
- Stack IDs are local to a partition and as a result trivially fit in 32 bits.
Assuming a partition is O(MiB) in size (so, O(millions) of events), even if
each event has a unique stack ID, it'll fit.
- String IDs follow the same logic as stack IDs.
- Goroutine IDs, when compressed as described in the event cleanup section, will
easily fit in 32 bits.
The full range of possible encodings for an event can be summarized thus:
This format change, before any compression, will result in an encoded trace size
of 2-3x.
Plugging in the size of each proposed event above into the breakdown of a trace
such as [this 288 MiB one produced by
felixge@](https://gist.github.com/felixge/a79dad4c30e41a35cb62271e50861edc)
reveals an increase in encoded trace size by a little over 2x.
The aforementioned 288 MiB trace grows to around 640 MiB in size, not including
additional timestamp events, the batch header in this scheme, or any tables at
the end of each partition.
This increase in trace size is likely unacceptable since there are several cases
where holding the trace in memory is desirable.
Luckily, the data in this format is very compressible.
This design notably doesn't handle inline strings.
The easy thing to do would be to always put them in the string table, but then
they're held onto until at least the end of a partition, which isn't great for
memory use in the face of many non-cacheable strings.
This design would have to be extended to support inline strings, perhaps
rounding up their length to a multiple of 4 bytes.
#### On-line integer compression
One possible route from here is to simply choose a different integer compression
scheme.
There exist many fast block-based integer compression schemes, ranging from [GVE
and
PrefixVarint](https://en.wikipedia.org/wiki/Variable-length_quantity#Group_Varint_Encoding)
to SIMD-friendly bit-packing schemes.
Inline strings would likely have to be excepted from this compression scheme,
and would likely need to be sent out-of-band.
#### On-demand user compression
Another alternative compression scheme is to have none, and instead expect the
trace consumer to perform their own compression, if they want the trace to be
compressed.
This certainly simplifies the runtime implementation, and would give trace
consumers a choice between trace encoding speed and trace size.
For a sufficiently fast compression scheme (perhaps LZ4), it's possible this
could rival integer compression in encoding overhead.
More investigation on this front is required.
### Becoming a CPU profile superset
Traces already contain CPU profile samples, but are missing some information
compared to what Go CPU profile pprof protos contain.
We should consider bridging that gap, such that it is straightforward to extract
a CPU profile from an execution trace.
This might look like just emitting another section to the trace that contains
basically the entire pprof proto.
This can likely be added as a footer to the trace.
# Proposal: API for unstable runtime metrics
Author: Michael Knyszek
## Background & Motivation
The need for a new API for unstable metrics was already summarized quite well by
@aclements, so I'll quote that here:
> The runtime currently exposes heap-related metrics through
> `runtime.ReadMemStats` (which can be used programmatically) and
> `GODEBUG=gctrace=1` (which is difficult to read programmatically).
> These metrics are critical to understanding runtime behavior, but have some
> serious limitations:
> 1. `MemStats` is hard to evolve because it must obey the Go 1 compatibility
> rules.
> The existing metrics are confusing, but we can't change them.
> Some of the metrics are now meaningless (like `EnableGC` and `DebugGC`),
> and several have aged poorly (like hard-coding the number of size classes
> at 61, or only having a single pause duration per GC cycle).
> Hence, we tend to shy away from adding anything to this because we'll have
> to maintain it for the rest of time.
> 1. The `gctrace` format is unspecified, which means we can evolve it (and have
> completely changed it several times).
> But it's a pain to collect programmatically because it only comes out on
> stderr and, even if you can capture that, you have to parse a text format
> that changes.
> Hence, automated metric collection systems ignore gctrace.
> There have been requests to make this programmatically accessible (#28623).
> There are many metrics I would love to expose from the runtime memory manager
> and scheduler, but our current approach forces me to choose between two bad
> options: programmatically expose metrics that are so fundamental they'll make
> sense for the rest of time, or expose unstable metrics in a way that's
> difficult to collect and process programmatically.
Other problems with `ReadMemStats` include performance, such as the need to
stop-the-world.
While it's otherwise difficult to collect many of the metrics in `MemStats`, not
all metrics require it, and it would be nice to be able to acquire some subset
of metrics without a global application penalty.
## Requirements
Conversing with @aclements, we agree that:
* The API should be easily extendable with new metrics.
* The API should be easily retractable, to deprecate old metrics.
* Removing a metric should not break any Go applications as per the Go 1
compatibility promise.
* The API should be discoverable, to obtain a list of currently relevant
metrics.
* The API should be rich, allowing a variety of metrics (e.g. distributions).
* The API implementation should minimize CPU/memory usage, such that it does not
appreciably affect any of the metrics being measured.
* The API should include useful existing metrics already exposed by the runtime.
## Goals
Given the requirements, I suggest we prioritize the following concerns when
designing the API in the following order.
1. Extensibility.
* Metrics are "unstable" and therefore it should always be compatible to add
or remove metrics.
* Since metrics will tend to be implementation-specific, this feature is
critical.
1. Discoverability.
* Because these metrics are "unstable," there must be a way for the
application, and for the human writing the application, to discover the
set of usable metrics and be able to do something useful with that
information (e.g. log the metric).
* The API should enable collecting a subset of metrics programmatically.
For example, one might want to "collect all memory-related metrics" or
"collect all metrics which are efficient to collect".
1. Performance.
* Must have a minimized effect on the metrics it returns in the
steady-state.
    * Should scale up to 100s of metrics, an amount that a human might consider "a
lot."
* Note that picking the right types to expose can limit the amount of
metrics we need to expose.
For example, a distribution type would significantly reduce the number
of metrics.
1. Ergonomics.
* The API should be as easy to use as it can be, given the above.
## Design
I propose we add a new standard library package to support a new runtime metrics
API to avoid polluting the namespace of existing packages.
The proposed name of the package is the `runtime/metrics` package.
I propose that this package expose a sampling-based API for acquiring runtime
metrics, in the same vein as `runtime.ReadMemStats`, that meets this proposal's
stated goals.
The sampling approach is taken in opposition to a stream-based (or event-based)
API.
Many of the metrics currently exposed by the runtime are "continuous" in the
sense that they're cheap to update and are updated frequently enough that
emitting an event for every update would be quite expensive, and would require
scaffolding to allow the user to control the emission rate.
Unless noted otherwise, this document will assume a sampling-based API.
With that said, I believe that in the future it will be worthwhile to expose an
event-based API as well, taking a hybrid approach, much like Linux's `perf`
tool.
See "Time series data" for a discussion of such an extension.
### Representation of metrics
Firstly, it probably makes the most sense to interact with a set of metrics,
rather than one metric at a time.
Many metrics require that the runtime reach some safe state to collect, so
naturally it makes sense to collect all such metrics at this time for
performance.
For the rest of this document, we're going to consider "sets of metrics" as the
unit of our API instead of individual metrics for this reason.
Second, the extendability and retractability requirements imply a less rigid
data structure to represent and interact with a set of metrics.
Perhaps the least rigid data structure in Go is something like a byte slice, but
this is decidedly too low-level to use from within a Go application because it
would need to have an encoding.
Simply defining a new encoding for this would be a non-trivial undertaking with
its own complexities.
The next least-rigid data structure is probably a Go map, which allows us to
associate some key for a metric with a sampled metric value.
The two most useful properties of maps here is that their set of keys is
completely dynamic, and that they allow efficient random access.
The inconvenience of a map though is its undefined iteration order.
While this might not matter if we're just constructing an RPC message to hit an
API, it does matter if one just wants to print statistics to STDERR every once
in a while for debugging.
A slightly more rigid data structure that would be useful for managing an
unstable set of metrics is a slice of structs, with each struct containing a
key (the metric name) and a value.
This allows us to have a well-defined iteration order, and it's up to the user
if they want efficient random access.
For example, they could keep the slice sorted by metric keys, and do a binary
search over them, or even have a map on the side.
There are several variants of this slice approach (e.g. struct of keys slice and
values slice), but I think the general idea of using slices of key-value pairs
strikes the right balance between flexibility and usability.
Going any further in terms of rigidity and we end up right where we don't want
to be: with a `MemStats`-like struct.
Third, I propose the metric key be something abstract but still useful for
humans, such as a string.
An alternative might be an integral ID, where we provide a function to obtain a
metric's name from its ID.
However, using an ID pollutes the API.
Since we want to allow a user to ask for specific metrics, we would be required
to provide named constants for each metric which would later be deprecated.
It's also unclear that this would give any performance benefit at all.
Finally, we want the metric value to be able to take on a variety of forms.
Many metrics might work great as `uint64` values, but most do not.
For example we might want to collect a distribution of values (size classes are
one such example).
Distributions in particular can take on many different forms, for example if we
wanted to have an HDR histogram of STW pause times.
In the interest of being as extensible as possible, something like an empty
interface value could work here.
However, an empty interface value has implications for performance.
How do we efficiently populate that empty interface value without allocating?
One idea is to only use pointer types, for example it might contain `*float64`
or `*uint64` values.
While this strategy allows us to re-use allocations between samples, it's
starting to rely on the internal details of Go interface types for efficiency.
Fundamentally, the problem we have here is that we want to include a fixed set
of valid types as possible values.
This concept maps well to the notion of a sum type in other languages.
While Go lacks such a facility, we can emulate one.
Consider the following representation for a value:
```go
type Kind int
const (
KindBad Kind = iota
KindUint64
KindFloat64
KindFloat64Histogram
)
type Value struct {
// unexported fields
}
func (v Value) Kind() Kind
// panics if v.Kind() != KindUint64
func (v Value) Uint64() uint64
// panics if v.Kind() != KindFloat64
func (v Value) Float64() float64
// panics if v.Kind() != KindFloat64Histogram
func (v Value) Float64Histogram() *Float64Histogram
```
The advantage of such a representation means that we can hide away details about
how each metric sample value is actually represented.
For example, we could embed a `uint64` slot into the `Value` which is used to
hold either a `uint64`, a `float64`, or an `int64`, and which is populated
directly by the runtime without any additional allocations at all.
For types which will require an indirection, such as histograms, we could also
hold an `unsafe.Pointer` or `interface{}` value as an unexported field and pull
out the correct type as needed.
In these cases we would still need to allocate once up-front (the histogram
needs to contain a slice for counts, for example).
The downside of such a structure is mainly ergonomics.
In order to use it effectively, one needs to `switch` on the result of the
`Kind()` method, then call the appropriate method to get the underlying value.
While in that case we lose some type safety as opposed to using an `interface{}`
and a type-switch construct, there is some precedent for such a structure.
In particular a `Value` mimics the API `reflect.Value` in some ways.
Putting this all together, I propose sampled metric values look like
```go
// Sample captures a single metric sample.
type Sample struct {
Name string
Value Value
}
```
Furthermore, I propose that we use a slice of these `Sample` structures to
represent our "snapshot" of the current state of the system (i.e. the
counterpart to `runtime.MemStats`).
### Discoverability
To support discovering which metrics the system supports, we must provide a
function that returns the set of supported metric keys.
I propose that the discovery API return a slice of "metric descriptions" which
contain a "Name" field referring to a metric key.
Using a slice here mirrors the sampling API.
#### Metric naming
Choosing a naming scheme for each metric will significantly influence its usage,
since these are the names that will eventually be surfaced to the user.
There are two important properties we would like to have such that these metric
names may be smoothly and correctly exposed to the user.
The first, and perhaps most important of these properties is that semantics be
tied to their name.
If the semantics (including the type of each sample value) of a metric changes,
then the name should too.
The second is that the name should be easily parsable and mechanically
rewritable, since different metric collection systems have different naming
conventions.
Putting these two together, I propose that the metric name be built from two
components: a forward-slash-separated path to a metric where each component is
lowercase words separated by hyphens (the "name", e.g. "/memory/heap/free"), and
its unit (e.g. bytes, seconds).
I propose we separate the two components of "name" and "unit" by a colon (":")
and provide a well-defined format for the unit (e.g. "/memory/heap/free:bytes").
Representing the metric name as a path is intended to provide a mechanism for
namespacing metrics.
Many metrics naturally group together, and this provides a straightforward way
of filtering out only a subset of metrics, or perhaps matching on them.
The use of lower-case and hyphenated path components is intended to make the
name easy to translate to most common naming conventions used in metrics
collection systems.
The introduction of this new API is also a good time to rename some of the more
vaguely named statistics, and perhaps to introduce a better namespacing
convention.
Including the unit in the name may be a bit surprising at first.
First of all, why should the unit even be a string? One alternative way to
represent the unit is to use some structured format, but this has the potential
to lock us into some bad decisions or limit us to only a certain subset of
units.
Using a string gives us more flexibility to extend the units we support in the
future.
Thus, I propose that no matter what we do, we should definitely keep the unit as
a string.
In terms of a format for this string, I think we should keep the unit closely
aligned with the Go benchmark output format to facilitate a nice user experience
for measuring these metrics within the Go testing framework.
This goal suggests the following very simple format: a series of all-lowercase
common base unit names, singular or plural, without SI prefixes (such as
"seconds" or "bytes", not "nanoseconds" or "MiB"), potentially containing
hyphens (e.g. "cpu-seconds"), delimited by either `*` or `/` characters.
A regular expression is sufficient to describe the format, and ignoring the
restriction of common base unit names, would look like
`^[a-z-]+(?:[*\/][a-z-]+)*$`.
Why should the unit be a part of the name? Mainly to help maintain the first
property mentioned above.
If we decide to change a metric's unit, which represents a semantic change, then
the name must also change.
Also, in this situation, it's much more difficult for a user to forget to
include the unit.
If their metric collection system has no rules about names, then great, they can
just use whatever Go gives them.
If they do (and most seem to be fairly opinionated) it forces the user to
account for the unit when dealing with the name and it lessens the chance that
it would be forgotten.
Furthermore, splitting a string is typically less computationally expensive than
combining two strings.
#### Metric Descriptions
Firstly, any metric description must contain the name of the metric.
No matter which way we choose to store a set of descriptions, it is both useful
and necessary to carry this information around.
Another useful field is an English description of the metric.
This description may then be propagated into metrics collection systems
dynamically.
The metric description should also indicate the performance sensitivity of the
metric.
Today `ReadMemStats` forces the user to endure a stop-the-world to collect all
metrics.
There are a number of pieces of information we could add, but one good one for
now would be "does this metric require a stop-the-world event?".
The intended use of such information would be to collect certain metrics less
often, or to exclude them altogether from metrics collection.
While this is fairly implementation-specific for metadata, the majority of
tracing GC designs involve a stop-the-world event at one point or another.
Another useful aspect of a metric description would be to indicate whether the
metric is a "gauge" or a "counter" (i.e. it increases monotonically).
We have examples of both in the runtime and this information is often useful to
bubble up to metrics collection systems to influence how they're displayed and
what operations are valid on them (e.g. counters are often more usefully viewed
as rates).
By including whether a metric is a gauge or a counter in the descriptions,
metrics collection systems don't have to try to guess, and users don't have to
annotate exported metrics manually; they can do so programmatically.
Finally, metric descriptions should allow users to filter out metrics that their
application can't understand.
The most common situation in which this can happen is if a user upgrades or
downgrades the Go version their application is built with, but they do not
update their code.
Another situation in which this can happen is if a user switches to a different
Go runtime (e.g. TinyGo).
There may be a new metric in this Go version represented by a type which was not
used in previous versions.
For this case, it's useful to include type information in the metric description
so that applications can programmatically filter these metrics out.
In this case, I propose we add a `Kind` field to the description.
#### Documentation
While the metric descriptions allow an application to programmatically discover
the available set of metrics at runtime, it's tedious for humans to write an
application just to dump the set of metrics available to them.
For `ReadMemStats`, the documentation is on the `MemStats` struct itself.
For `gctrace` it is in the runtime package's top-level comment.
Because this proposal doesn't tie metrics to Go variables or struct fields, the
best we can do is what `gctrace` does and document it in the metrics
package-level documentation.
A test in the `runtime/metrics` package will ensure that the documentation
always matches the metric's English description.
Furthermore, the documentation should contain a record of when metrics were
added and when metrics were removed (such as a note like "(since Go 1.X)" in the
English description).
Users who are using an old version of Go but looking at up-to-date
documentation, such as the documentation exported to golang.org, will be able to
more easily discover information relevant to their application.
If a metric is removed, the documentation should note which version removed it.
### Time series metrics
The API as described so far has been a sampling-based API, but many metrics are
updated at well-defined (and relatively infrequent) intervals, such as many of
the metrics found in the `gctrace` output.
These metrics, which I'll call "time series metrics," may be sampled, but the
sampling operation is inherently lossy.
In many cases it's very useful for performance debugging to have precise
information of how a metric might change e.g. from GC cycle to GC cycle.
Measuring such metrics thus fits better in an event-based, or stream-based API,
which emits a stream of metric values (tagged with precise timestamps) which are
then ingested by the application and logged someplace.
While we stated earlier that considering such time series metrics is outside of
the scope of this proposal, it's worth noting that buying into a sampling-based
API today does not close any doors toward exposing precise time series metrics
in the future.
A straightforward way of extending the API would be to add the time series
metrics to the total list of metrics, allowing the usual sampling-based approach
if desired, while also tagging some metrics with a "time series" flag in their
descriptions.
The event-based API, in that form, could then just be a pure addition.
A feasible alternative in this space is to only expose a sampling API, but to
include a timestamp on event metrics to allow users to correlate metrics with
specific events.
For example, if metrics came from the previous GC, they would be tagged with the
timestamp of that GC, and if the metric and timestamp hadn't changed, the user
could identify that.
One interesting consequence of having a prompt, event-based API is that users could then react to Go runtime state on-the-fly, for example by detecting when the GC is running.
On the one hand, this could provide value to some users of Go, who require
fine-grained feedback from the runtime system.
On the other hand, the supported metrics will still always be unstable, so
relying on a metric for feedback in one release might no longer be possible in a
future release.
## Draft API Specification
Given the discussion of the design above, I propose the following draft API
specification.
```go
package metrics

// Float64Histogram represents a distribution of float64 values.
type Float64Histogram struct {
	// Counts contains the weights for each histogram bucket. The length of
	// Counts is equal to the length of Buckets plus one to account for the
	// implicit minimum bucket.
	//
	// Given N buckets, the following is the mathematical relationship between
	// Counts and Buckets.
	//   count[0]   is the weight of the range (-inf, bucket[0])
	//   count[n]   is the weight of the range [bucket[n-1], bucket[n]), for 0 < n < N-1
	//   count[N-1] is the weight of the range [bucket[N-2], inf)
	Counts []uint64

	// Buckets contains the boundaries between histogram buckets, in increasing order.
	//
	// Because this slice contains boundaries, there are len(Buckets)+1 total buckets:
	// a bucket for all values less than the first boundary, a bucket covering each
	// [slice[i], slice[i+1]) interval, and a bucket for all values greater than or
	// equal to the last boundary.
	Buckets []float64
}

// Clone generates a deep copy of the Float64Histogram.
func (f *Float64Histogram) Clone() *Float64Histogram

// Kind is a tag for a metric Value which indicates its type.
type Kind int

const (
	// KindBad indicates that the Value has no type and should not be used.
	KindBad Kind = iota

	// KindUint64 indicates that the type of the Value is a uint64.
	KindUint64

	// KindFloat64 indicates that the type of the Value is a float64.
	KindFloat64

	// KindFloat64Histogram indicates that the type of the Value is a *Float64Histogram.
	KindFloat64Histogram
)

// Value represents a metric value returned by the runtime.
type Value struct {
	kind    Kind
	scalar  uint64         // contains scalar values for scalar Kinds.
	pointer unsafe.Pointer // contains non-scalar values.
}

// Value returns a value of one of the types mentioned by Kind.
//
// This function may allocate memory.
func (v Value) Value() interface{}

// Kind returns a tag representing the kind of value this is.
func (v Value) Kind() Kind

// Uint64 returns the internal uint64 value for the metric.
//
// If v.Kind() != KindUint64, this method panics.
func (v Value) Uint64() uint64

// Float64 returns the internal float64 value for the metric.
//
// If v.Kind() != KindFloat64, this method panics.
func (v Value) Float64() float64

// Float64Histogram returns the internal *Float64Histogram value for the metric.
//
// The returned value may be reused by calls to Read, so the user should clone
// it if they intend to use it across calls to Read.
//
// If v.Kind() != KindFloat64Histogram, this method panics.
func (v Value) Float64Histogram() *Float64Histogram

// Description describes a runtime metric.
type Description struct {
	// Name is the full name of the metric, including the unit.
	//
	// The format of the metric may be described by the following regular expression.
	//   ^(?P<name>/[^:]+):(?P<unit>[^:*\/]+(?:[*\/][^:*\/]+)*)$
	//
	// The format splits the name into two components, separated by a colon: a path which always
	// starts with a /, and a machine-parseable unit. The name may contain any valid Unicode
	// codepoint in between / characters, but by convention will try to stick to lowercase
	// characters and hyphens. An example of such a path might be "/memory/heap/free".
	//
	// The unit is by convention a series of lowercase English unit names (singular or plural)
	// without prefixes delimited by '*' or '/'. The unit names may contain any valid Unicode
	// codepoint that is not a delimiter.
	// Examples of units might be "seconds", "bytes", "bytes/second", "cpu-seconds",
	// "byte*cpu-seconds", and "bytes/second/second".
	//
	// A complete name might look like "/memory/heap/free:bytes".
	Name string

	// Cumulative is whether or not the metric is cumulative. If a cumulative metric is just
	// a single number, then it increases monotonically. If the metric is a distribution,
	// then each bucket count increases monotonically.
	//
	// This flag thus indicates whether or not it's useful to compute a rate from this value.
	Cumulative bool

	// Kind is the kind of value for this metric.
	//
	// The purpose of this field is to allow users to filter out metrics whose values are
	// types which their application may not understand.
	Kind Kind

	// StopTheWorld is whether or not the metric requires a stop-the-world
	// event in order to collect it.
	StopTheWorld bool
}

// All returns a slice containing metric descriptions for all supported metrics.
func All() []Description

// Sample captures a single metric sample.
type Sample struct {
	// Name is the name of the metric sampled.
	//
	// It must correspond to a name in one of the metric descriptions
	// returned by All.
	Name string

	// Value is the value of the metric sample.
	Value Value
}

// Read populates each Value element in the given slice of metric samples.
//
// Desired metrics should be present in the slice with the appropriate name.
// The user of this API is encouraged to re-use the same slice between calls.
//
// Samples whose names do not appear in the slice returned by All will
// simply be left untouched (Value.Kind == KindBad).
func Read(m []Sample)
```
The usage of the API we have in mind for collecting specific metrics is the
following:
```go
var stats = []metrics.Sample{
	{Name: "/gc/heap/goal:bytes"},
	{Name: "/gc/pause-latency-distribution:seconds"},
}

// Somewhere...
...
go statsLoop(stats, 30*time.Second)
...

func statsLoop(stats []metrics.Sample, d time.Duration) {
	// Read and print stats every interval d.
	ticker := time.NewTicker(d)
	for {
		metrics.Read(stats)
		for _, sample := range stats {
			split := strings.IndexByte(sample.Name, ':')
			name, unit := sample.Name[:split], sample.Name[split+1:]
			switch sample.Value.Kind() {
			case metrics.KindUint64:
				log.Printf("%s: %d %s", name, sample.Value.Uint64(), unit)
			case metrics.KindFloat64:
				log.Printf("%s: %f %s", name, sample.Value.Float64(), unit)
			case metrics.KindFloat64Histogram:
				v := sample.Value.Float64Histogram()
				m := computeMean(v)
				log.Printf("%s: %f avg %s", name, m, unit)
			default:
				log.Printf("%s: unknown kind %v", name, sample.Value.Kind())
			}
		}
		<-ticker.C
	}
}
```
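The `computeMean` helper used above is not part of the proposed API. One possible sketch weights each bucket's midpoint by its count; since the open-ended first and last buckets have no midpoint, this sketch approximates them by their finite boundary, so it is a rough estimate rather than an exact mean.

```go
package main

import (
	"fmt"
	"math"
)

// Float64Histogram mirrors the draft API's histogram layout:
// len(Counts) == len(Buckets)+1, Buckets are the inner boundaries.
type Float64Histogram struct {
	Counts  []uint64
	Buckets []float64
}

// computeMean approximates the mean of the distribution by weighting
// each bucket's midpoint by its count. The open-ended first and last
// buckets are approximated by their single finite boundary.
func computeMean(h *Float64Histogram) float64 {
	if len(h.Buckets) == 0 {
		return math.NaN()
	}
	var total uint64
	var sum float64
	for i, c := range h.Counts {
		if c == 0 {
			continue
		}
		var mid float64
		switch {
		case i == 0: // (-inf, Buckets[0])
			mid = h.Buckets[0]
		case i == len(h.Buckets): // [Buckets[len-1], +inf)
			mid = h.Buckets[len(h.Buckets)-1]
		default: // [Buckets[i-1], Buckets[i])
			mid = (h.Buckets[i-1] + h.Buckets[i]) / 2
		}
		sum += mid * float64(c)
		total += c
	}
	if total == 0 {
		return math.NaN()
	}
	return sum / float64(total)
}

func main() {
	h := &Float64Histogram{
		Counts:  []uint64{0, 2, 2, 0}, // midpoints 0.5 and 1.5, weight 2 each
		Buckets: []float64{0, 1, 2},
	}
	fmt.Println(computeMean(h)) // prints 1
}
```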
I believe common usage will be to simply slurp up all metrics, which would look
like this:
```go
...
// Generate a sample array for all the metrics.
desc := metrics.All()
stats := make([]metrics.Sample, len(desc))
for i := range desc {
	stats[i] = metrics.Sample{Name: desc[i].Name}
}
go statsLoop(stats, 30*time.Second)
...
```
## Proposed initial list of metrics
### Existing metrics
```
/memory/heap/free:bytes                        KindUint64 // (== HeapIdle - HeapReleased)
/memory/heap/uncommitted:bytes                 KindUint64 // (== HeapReleased)
/memory/heap/objects:bytes                     KindUint64 // (== HeapAlloc)
/memory/heap/unused:bytes                      KindUint64 // (== HeapInUse - HeapAlloc)
/memory/heap/stacks:bytes                      KindUint64 // (== StackInuse)
/memory/metadata/mspan/inuse:bytes             KindUint64 // (== MSpanInUse)
/memory/metadata/mspan/free:bytes              KindUint64 // (== MSpanSys - MSpanInUse)
/memory/metadata/mcache/inuse:bytes            KindUint64 // (== MCacheInUse)
/memory/metadata/mcache/free:bytes             KindUint64 // (== MCacheSys - MCacheInUse)
/memory/metadata/other:bytes                   KindUint64 // (== GCSys)
/memory/metadata/profiling/buckets-inuse:bytes KindUint64 // (== BuckHashSys)
/memory/other:bytes                            KindUint64 // (== OtherSys)
/memory/native-stack:bytes                     KindUint64 // (== StackSys - StackInuse)
/aggregates/total-virtual-memory:bytes         KindUint64 // (== sum over everything in /memory/**)
/gc/heap/objects:objects                       KindUint64 // (== HeapObjects)
/gc/heap/goal:bytes                            KindUint64 // (== NextGC)
/gc/cycles/completed:gc-cycles                 KindUint64 // (== NumGC)
/gc/cycles/forced:gc-cycles                    KindUint64 // (== NumForcedGC)
```
### New GC metrics
```
// Distribution of pause times, replaces PauseNs and PauseTotalNs.
/gc/pause-latency-distribution:seconds         KindFloat64Histogram

// Distribution of unsmoothed trigger ratio.
/gc/pacer/trigger-ratio-distribution:ratio     KindFloat64Histogram

// Distribution of what fraction of CPU time was spent on GC in each GC cycle.
/gc/pacer/utilization-distribution:cpu-percent KindFloat64Histogram

// Distribution of objects by size.
// Buckets correspond directly to size classes up to 32 KiB,
// after that it's approximated by an HDR histogram.
// allocs-by-size replaces BySize, TotalAlloc, and Mallocs.
// frees-by-size replaces BySize and Frees.
/malloc/allocs-by-size:bytes                   KindFloat64Histogram
/malloc/frees-by-size:bytes                    KindFloat64Histogram

// How many hits and misses in the mcache.
/malloc/cache/hits:allocations                 KindUint64
/malloc/cache/misses:allocations               KindUint64

// Distribution of sampled object lifetimes in number of GC cycles.
/malloc/lifetime-distribution:gc-cycles        KindFloat64Histogram

// How many page cache hits and misses there were.
/malloc/page/cache/hits:allocations            KindUint64
/malloc/page/cache/misses:allocations          KindUint64

// Distribution of stack scanning latencies. HDR histogram.
/gc/stack-scan-latency-distribution:seconds    KindFloat64Histogram
```
### Scheduler metrics
```
/sched/goroutines:goroutines            KindUint64
/sched/preempt/async:preemptions        KindUint64
/sched/preempt/sync:preemptions         KindUint64

// Distribution of how long goroutines stay in runnable
// before transitioning to running. HDR histogram.
/sched/time-to-run-distribution:seconds KindFloat64Histogram
```
## Backwards Compatibility
Note that although the set of metrics the runtime exposes will not be stable
across Go versions, the API to discover and access those metrics will be.
This proposal strictly increases the API surface of the Go standard
library without changing any existing functionality, and is therefore Go 1
compatible.
# Design Draft: Go Vulnerability Database
Authors: Roland Shoemaker, Filippo Valsorda
[golang.org/design/draft-vulndb](https://golang.org/design/draft-vulndb)
This is a Draft Design, not a formal Go proposal, since it is a
large change that is still flexible.
The goal of circulating this draft design is to collect feedback
to shape an intended eventual proposal.
## Goal
We want to provide a low-noise, reliable way for Go developers to
be alerted of known security vulnerabilities that affect their
applications.
We aim to build a first-party, curated, consistent database of
security vulnerabilities open to community submissions, and
static analysis tooling to surface only the vulnerabilities that
are likely to affect an application, minimizing false positives.
## The database
The vulnerability database will provide entries for known
vulnerabilities in importable (non-main) Go packages in public
modules.
**Curated dataset.**
The database will be actively maintained by the Go Security team,
and will provide consistent metadata and uniform analysis of the
tracked vulnerabilities, with a focus on enabling not just
detection, but also precise impact assessment.
**Basic metadata.**
Entries will include a database-specific unique identifier for
the vulnerability, affected package and version ranges, a coarse
severity grade, and `GOOS`/`GOARCH` if applicable.
If missing, we will also assign a CVE number.
**Targeting metadata.**
Each database entry will include metadata sufficient to enable
detection of impacted downstream applications with low false
positives.
For example, it will include affected symbols (functions,
methods, types, variables…) so that unaffected consumers can be
identified with static analysis.
**Web pages.**
Each vulnerability will link to a web page with the description
of the vulnerability, remediation instructions, and additional
links.
**Source of truth.**
The database will be maintained as a public git repository,
similar to other Go repositories.
The database entries will be available via a stable protocol (see
“The protocol”).
The contents of the repository itself will be in an internal
format which can change without notice.
**Triage process.**
Candidate entries will be sourced from existing streams (such as
the CVE database, and security mailing lists) as well as
community submissions.
Both will be processed by the team to ensure consistent metadata
and analysis.
*We want to specifically encourage maintainers to report
vulnerabilities in their own modules.*
**Not a disclosure process.**
Note that the goal of this database is tracking known, public
vulnerabilities, not coordinating the disclosure of new findings.
## The protocol
The vulnerability database will be served through a simple,
stable HTTPS and JSON-based protocol.
Vulnerabilities will be grouped by module, and an index file will
list the modules with known vulnerabilities and the last time
each entry has been updated.
The protocol will be designed to be served as a collection of
static files, and cacheable by simple HTTP proxies.
The index allows downloading and hosting a full mirror of the
database to avoid leaking module usage information.
Multiple databases can be fetched in parallel, and their entries
are combined, enabling private and commercial databases.
We’ll aim to use an interoperable format.
## The tooling
The primary consumer of the database and the protocol will be a
Go tool, tentatively `go audit`, which will analyze a module and
report what vulnerabilities it’s affected by.
The tool will analyze what vulnerabilities are likely to affect
the current module not only based on the versions of the
dependencies, but also based on the packages and code paths that
are reachable from a configured set of entry points (functions
and methods).
The precision of this analysis will be configurable.
When available, the tool will provide sample traces of how the
vulnerable code is reachable, to aid in assessing impact and
remediation.
The tool will accept a list of packages and report the
vulnerabilities that affect them (considering as entry points the
`main` and `init` functions for main packages, and exported
functions and methods for non-main packages).
The tool will also support a `-json` output mode, to integrate
reports in other tools, processes such as CI, and UIs, like how
golang.org/x/tools/go/packages tools use `go list -json`.
### Integrations
Besides direct invocations on the CLI and in CI, we want to make
vulnerability entries and audit reports widely available.
The details of each integration involve some open questions.
**vscode-go** will surface reports for vulnerabilities affecting
the workspace and offer easy version bumps.
*Open question*: can vscode-go invoke `go audit`, or do we need a
tighter integration into `gopls`?
**pkg.go.dev** will show vulnerabilities in the displayed
package, and possibly vulnerabilities in its dependencies.
*Open question*: if we analyze transitive dependencies, what
versions should we consider?
At **runtime**, programs will be able to query reports affecting
the dependencies they were built with through `debug.BuildInfo`.
*Open question*: how should applications handle the fact that
runtime reports will have higher false positives due to lack of
source code access?
In the future, we'll also consider integration into other `go`
tool commands, like `go get` and/or `go test`.
Finally, we hope the entries in the database will flow into other
existing systems that provide vulnerability tracking, with their
own integrations.
# Proposal: Support for pprof profiler labels
Author: Michael Matloob
Last updated: 15 May 2017 (to reflect actual implementation)
Discussion at https://golang.org/issue/17280.
## Abstract
This document proposes support for adding labels to pprof profiler records.
Labels are a key-value map that is used to distinguish calls of the same
function in different contexts when looking at profiles.
## Background
[Proposal #16093](golang.org/issue/16093) proposes to generate profiles in the
gzipped profile proto format that's now the standard format pprof expects
profiles to be in.
This format supports adding labels to profile records, but currently the Go
profiler does not produce those labels.
We propose adding a mechanism for setting profiler labels in Go.
These profiler labels are attached to profile samples, which correspond to a
snapshot of a goroutine's stack.
Because of this, we need the labels to be associated with a goroutine so that
they can be accessible at profile sampling time, which may occur during memory
allocation, lock acquisition, or in the handler for SIGPROF, an asynchronous
signal.
## Motivation
Profiles contain a limited amount of context for each sample: essentially the
call stack at the time each sample was taken.
But a user profiling their code may need additional context when debugging a
problem: Was there a particular user or RPC or other context-dependent data that
accounted for the code being executed?
This change allows users to annotate profiles with that information for more
fine-grained profiling.
It is natural to use `context.Context` types to store this information, because
their purpose is to hold context-dependent data. So the `runtime/pprof` package
API adds labels to and changes labels on opaque `context.Context` values.
Supporting profiler labels necessarily changes the runtime package, because
that's where profiling is implemented.
The `runtime` package will expose internal hooks to package `runtime/pprof` which
it uses to implement its `Context`-based API.
One goal of the design is to avoid creating a mechanism that could be used to
implement goroutine-local storage.
That's why it's possible to set profile labels but not retrieve them.
## API
The following types and functions will be added to the
[runtime/pprof](golang.org/pkg/runtime/pprof) package.
```go
package pprof

// SetGoroutineLabels sets the current goroutine's labels to match ctx.
// This is a lower-level API than Do, which should be used instead when possible.
func SetGoroutineLabels(ctx context.Context) {
	ctxLabels, _ := ctx.Value(labelContextKey{}).(*labelMap)
	runtime_setProfLabel(unsafe.Pointer(ctxLabels))
}

// Do calls f with a copy of the parent context with the
// given labels added to the parent's label map.
// Each key/value pair in labels is inserted into the label map in the
// order provided, overriding any previous value for the same key.
// The augmented label map will be set for the duration of the call to f
// and restored once f returns.
func Do(ctx context.Context, labels LabelSet, f func(context.Context)) {
	defer SetGoroutineLabels(ctx)
	ctx = WithLabels(ctx, labels)
	SetGoroutineLabels(ctx)
	f(ctx)
}

// LabelSet is a set of labels.
type LabelSet struct {
	list []label
}

// Labels takes an even number of strings representing key-value pairs
// and makes a LabelSet containing them.
// A label overwrites a prior label with the same key.
func Labels(args ...string) LabelSet {
	if len(args)%2 != 0 {
		panic("uneven number of arguments to pprof.Labels")
	}
	labels := LabelSet{}
	for i := 0; i+1 < len(args); i += 2 {
		labels.list = append(labels.list, label{key: args[i], value: args[i+1]})
	}
	return labels
}

// Label returns the value of the label with the given key on ctx, and a boolean indicating
// whether that label exists.
func Label(ctx context.Context, key string) (string, bool) {
	ctxLabels := labelValue(ctx)
	v, ok := ctxLabels[key]
	return v, ok
}

// ForLabels invokes f with each label set on the context.
// The function f should return true to continue iteration or false to stop iteration early.
func ForLabels(ctx context.Context, f func(key, value string) bool) {
	ctxLabels := labelValue(ctx)
	for k, v := range ctxLabels {
		if !f(k, v) {
			break
		}
	}
}
```
### `Context` changes
Each `Context` may have a set of profiler labels associated with it.
`Do` calls `f` with a new context whose labels map is
the parent context's labels map with the additional label arguments added.
Consider the tree of function calls during an execution of the program,
treating concurrent and deferred calls like any other. The labels of a
function are those installed by the first call to Do found by
walking up from that function toward the root of the tree. Each profiler
sample records the labels of the currently executing function.
### Runtime changes
The profiler will annotate all profile samples of each goroutine by the set of
labels associated with that goroutine.
Two hooks in the runtime, `func runtime_setProfLabel(labels unsafe.Pointer)` and
`func runtime_getProfLabel() unsafe.Pointer` are linknamed
into `runtime/pprof` and are used for setting and getting profile labels from the
current goroutine. These functions are only accessible from `runtime/pprof`, which
prevents them from being misused to implement a Goroutine-local storage facility.
The profile label implementation structure is left opaque to the runtime.
`runtime.CPUProfile` is deprecated. `runtime_pprof_readProfile`,
another runtime function linknamed into `runtime/pprof`, is added as a way for `runtime/pprof` to retrieve the raw label-annotated profile data.
New goroutines inherit the labels set on their creator.
## Compatibility
There are no compatibility issues with this change. The compressed binary format
emitted by the profiler already records labels (see
[proposal 16093](golang.org/issue/16093)), but the profiler does not populate
them.
## Implementation
`context.Context` will have an internal label set representation associated with it.
This leaves the option open to change the implementation in the future to improve
the performance characteristics of using profiler labels.
The initial implementation of the label set is a
`map[string]string` that is copied when new labels are added. However, the
specification permits more sophisticated implementations that scale to large
numbers of label changes such as persistent set structures or diff arrays. This
would allow a set of _n_ labels to be built up in at most
O(_n_ log _n_) time.
This change requires the profile signal handler to interact with pointers, which
means it has to interact with the garbage collector.
There are two complications to this:
1. This requires the profile signal handler to save the label set structure in the
CPU profile structure, which is allocated off-heap.
Addressing this will require either adding the CPU profile structure as a new GC
root, or allocating the CPU profile structure in the garbage-collected heap.
2. Normally, writing the label set structure to the CPU profile structure would
require a write barrier, but write barriers are disallowed in a signal handler.
This can be addressed by treating the CPU profile structure similar to stacks,
which also do not have write barriers.
This could mean a STW re-scan of the CPU profile structure, or shading the old
label set structure when `SetGoroutineLabels` replaces it.
# Proposal: Security Policy for Go
Author(s): Jason Buberel
Last updated: 2015-07-31
Discussion at https://golang.org/issue/11502.
## Abstract
Go programs are being deployed as part of security-critical applications.
Although Go has a generally good history of being free of security
vulnerabilities, the current process for handling security issues is very
informal. In order to be more transparent and the better coordinate with the
community, I am proposing that the Go project adopt a well-defined security
and vulnerability disclosure policy.
## Background
The Go standard library includes a complete, modern [cryptography
package](https://golang.org/pkg/crypto/). Since the initial release of Go,
there has a single documented security vulnerability [CVE-2014-7189]
(https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-7189) in the crypto
package. This is a promising track record, but as Go usage increases the
language and standard library will come under increasing scrutiny by the
security research community.
In order to better manage security issues, a formal security policy for Go
should be established.
Other language and library open source projects have established security
policies. The following policies were reviewed and considered in the creation
of this proposal:
* [Python Security Policy](https://www.python.org/news/security/)
* [Ruby on Rails Security Policy](http://rubyonrails.org/security/)
* [Rust Security Policy](https://www.rust-lang.org/security.html)
* [Webkit Security Policy](https://www.webkit.org/security/)
* [Xen Project Security Policy](https://www.xenproject.org/security-policy.html)
These policies differ in various aspects, but in general there is a common set
of guidelines that are typically established:
* How security issues should be reported
* Who will be responsible for reviewing these reports
* What is the response time promises made for initial review
* Exactly what steps will be followed for handing issues
* What type of embargo period will be applied
* How will communication of issues be handled, both pre- and post-disclosure
It was also suggested that the Go project consider the use of managed security
services, such as [HackerOne](https://hackerone.com/). The consensus of
commenters on this topic was a reluctance to base the Go process on a third-
party system at this time.
## Proposal
Among the existing security policies reviewed, the [Rust
policy](https://www.rust-lang.org/security.html) is considered a good starting
point. Once adopted, this policy will be hosted at
[https://golang.org/security](https://golang.org/security). The details of the
policy are in the Implementation section below.
## Implementation
### Reporting a Security Bug
Safety is one of the core principles of Go, and to that end, we would like to
ensure that Go has a secure implementation. Thank you for taking the time to
responsibly disclose any issues you find.
All security bugs in the Go distribution should be reported by email to
[security@golang.org](mailto:security@golang.org). This list is delivered to a
small security team. Your email will be acknowledged within 24 hours, and
you'll receive a more detailed response to your email within 72 hours
indicating the next steps in handling your report. If you would like, you can
encrypt your report using our PGP key (listed below).
Please use a descriptive subject line for your report email. After the initial
reply to your report, the security team will endeavor to keep you informed of
the progress being made towards a fix and full announcement. As recommended by
RFPolicy, these updates will be sent at least every five days. In reality,
this is more likely to be every 24-48 hours.
If you have not received a reply to your email within 48 hours, or have not
heard from the security team for the past five days, please contact the
following members of the Go security team directly:
* Contact the primary security coordinator - [Andrew Gerrand]
(mailto:adg@golang.org) - directly.
* Contact the secondary coordinator - [Adam Langley](mailto:agl@google.com) -
[public key](https://www.imperialviolet.org/key.asc) directly.
* Post a message to [golang-dev@golang.org](mailto:golang-dev@golang.org) or
[golang-dev web interface]
(https://groups.google.com/forum/#!forum/golang-dev).
Please note that golang-dev@golang.org is a public discussion forum. When
escalating on this list, please do not disclose the details of the issue.
Simply state that you're trying to reach a member of the security team.
### Flagging Existing Issues as Security-related
If you believe that an [existing issue](https://github.com/golang/go/issues)
is security-related, we ask that you send an email to
[security@golang.org](mailto:security@golang.org). The email
should include the issue ID and a short description of why it should be
handled according to this security policy.
### Disclosure Process
The Go project will use the following disclosure process:
1. Once the security report is received it will be assigned a primary handler.
This person will coordinate the fix and release process.
1. The problem will be confirmed and a list of all affected versions is
determined.
1. Code will be audited to find any potential similar problems.
1. If it is determined, in consultation with the submitter, that a CVE-ID is
required the primary handler will be responsible for obtaining via email
to the [oss-distros]
(http://oss-security.openwall.org/wiki/mailing-lists/distros) list.
1. Fixes will be prepared for the current stable release and the head/master
revision. These fixes will not be committed to the public repository.
1. Details of the issue and patch files will be sent to the
[distros@openwall]
(http://oss-security.openwall.org/wiki/mailing-lists/distros)
mailing list.
1. Three working days following this notification, the fixes will be
applied to the [public repository](https://go.googlesource.com/go) and new
builds deployed to [https://golang.org/dl](https://golang.org/dl).
1. On the date that the fixes are applied, announcements will be sent to
[golang-announce](https://groups.google.com/forum/#!forum/golang-announce),
[golang-dev@golang.org](https://groups.google.com/forum/#!forum/golang-dev),
[golang-nuts@golang.org](https://groups.google.com/forum/#!forum/golang-nuts),
and the [oss-security@openwall](http://www.openwall.com/lists/oss-security/)
mailing list.
1. Within 6 hours of the mailing lists being notified, a copy of the advisory
will also be published on the [Go blog](https://blog.golang.org).

This process can take some time, especially when coordination is required with
maintainers of other projects. Every effort will be made to handle the bug in
as timely a manner as possible; however, it is important that we follow the
release process above to ensure that the disclosure is handled in a consistent
manner.
For those security issues that include the assignment of a CVE-ID, the issue
will be publicly listed under the
["Golang" product on the CVEDetails website](http://www.cvedetails.com/vulnerability-list/vendor_id-14185/Golang.html)
as well as in the
[National Vulnerability Database](https://web.nvd.nist.gov/view/vuln/search).
### Receiving Security Updates
The best way to receive security announcements is to subscribe to the
[golang-announce](https://groups.google.com/forum/#!forum/golang-announce)
mailing list. Any messages pertaining to a security issue will be prefixed
with `[security]`.
### Comments on This Policy
If you have any suggestions to improve this policy, please send an email to
[golang-dev@golang.org](mailto:golang-dev@golang.org) for discussion.
### Plaintext PGP Key for [security@golang.org](mailto:security@golang.org)
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
Comment: GPGTools - https://gpgtools.org
mQINBFXI1h0BEADZdm05GDFWvjmQKutUVb0cJKS+VR+6XU3g/YQZGC8tnIL6i7te
+fPJHfQc2uIw0xeBgZX4Ni/S8yIqsbIjqYeaToX7QFUufJDQwrmlQRDVAvvT5HBT
J80JEs7yHRreFoLzB6dnWehWXzWle4gFKeIy+hvLrYquZVvbeEYTnX7fNzZg0+5L
ksvj7lnQlJIy1l3sL/7uPr9qsm45/hzd0WjTQS85Ry6Na3tMwRpqGENDh25Blz75
8JgK9JmtTJa00my1zzeCXU04CKKEMRbkMLozzudOH4ZLiLWcFiKRpeCn860wC8l3
oJcyyObuTSbr9o05ra3On+epjCEFkknGX1WxPv+TV34i0a23AtuVyTCloKb7RYXc
7mUaskZpU2rFBqIkzZ4MQJ7RDtGlm5oBy36j2QL63jAZ1cKoT/yvjJNp2ObmWaVF
X3tk/nYw2H0YDjTkTCgGtyAOj3Cfqrtsa5L0jG5K2p4RY8mtVgQ5EOh7QxuS+rmN
JiA39SWh7O6uFCwkz/OCXzqeh6/nP10HAb9S9IC34QQxm7Fhd0ZXzEv9IlBTIRzk
xddSdACPnLE1gJcFHxBd2LTqS/lmAFShCsf8S252kagKJfHRebQJZHCIs6kT9PfE
0muq6KRKeDXv01afAUvoB4QW/3chUrtgL2HryyO8ugMu7leVGmoZhFkIrQARAQAB
tCZHbyBTZWN1cml0eSBUZWFtIDxzZWN1cml0eUBnb2xhbmcub3JnPokCPQQTAQoA
JwUCVcjWHQIbAwUJB4YfgAULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRA6RtGR
eVpYOLnDD/9YVTd6DTwdJq6irVfM/ICPlPTXB0JLERqCI1Veptcp56eQoJ0XWGQp
tkGlgbvmCzFo0B+65Te7YA4R3oyBCXd6JgyWQQPy5p60FHyuuCPVAReclSWyt9f2
Yj/u4DjghKhELOvPiI96egcU3g9jrEEcPjm7JYkc9M2gVSNOnnJvcD7wpQJNCzon
51eMZ1ZyfA5UCBTa0SaT9eXg5zwNlYQnB6ZF6TjXezkhLqlTsBuHxoNVf+9vCC0o
ZKIM2ovptMx9eEguTDKWaQ7tero7Zs/q5fwk/MDzM/LGJ9aXy2RCtqBxv46vDS7G
fCNq+aPD/wyFd6hxQkvkua6hgZwYT+cJWHYA2Yv0LO3BYOJdjfc+j2hjv+mC9lF0
UpWhCVJv3hHoFaxnz62GdROzf2wXz6aR9Saj1rYSvqT9jC20VInxqMufXNN2sbpo
Kyk6MTbAeepphQpfAWQv+ltWgBiEjuFxYdwv/vmw20996JV7O8nqkeCUW84B6su+
Y3bbdP9o3DBtOT0j9LTB/FucmdNCNHoO+EnNBKJd6FoYTGLWi3Rq9DLx2V9tdJHo
Bn67dymcl+iyp337HJNY+qS+KCgoqAWlxkzXRiXKb/yluhXdIkqhg4kL8JPAJvfS
cs7Zn67Mx04ixJnRMYCDmxtD4xPsFMzM7g8m3PQp+nE7WhujM/ImM7kCDQRVyNYd
ARAAlw9H/1ybQs4K3XKA1joII16rta9KS7ew76+agXo0jeSRwMEQfItOxYvfhmo8
+ydn5TWsTbifGU8L3+EBTMRRyzWhbaGO0Wizw7BTVJ7n5JW+ndPrcUpp/ilUk6AU
VxaO/8/R+9+VJZpoeoLHXYloFGNuX58GLIy1jSBvLsLl/Ki5IOrHvD1GK6TftOl5
j8IPC1LSBrwGJO803x7wUdQP/tsKN/QPR8pnBntrEgrQFSI+Q3qrCvVMmXnBlYum
jfOBt8pKMgB9/ix+HWN8piQNQiJxD+XjEM6XwUmQqIR7y5GINKWgundCmtYIzVgY
9p2Br6UPrTJi12LfKv5s2R6NnxFHv/ad29CpPTeLJRsSqFfqBL969BCpj/isXmQE
m4FtziZidARXo12KiGAnPF9otirNHp4+8hwNB3scf7cI53y8nZivO9cwI7BoClY6
ZIabjDcJxjK+24emoz3mJ5SHpZpQLSb9o8GbLLfXOq+4uzEX2A30fhrtsQb/x0GM
4v3EU1aP2mjuksyYbgldtY64tD35wqAA9mVl5Ux+g1HoUBvLw0h+lzwh370NJw//
ITvBQVUtDMB96rfIP4fL5pYl5pmRz+vsuJ0iXzm05qBgKfSqO7To9SWxQPdX89R4
u0/XVAlw0Ak9Zceq3W96vseEUTR3aoZCMIPiwfcDaq60rWUAEQEAAYkCJQQYAQoA
DwUCVcjWHQIbDAUJB4YfgAAKCRA6RtGReVpYOEg/EADZcIYw4q1jAbDkDy3LQG07
AR8QmLp/RDp72RKbCSIYyvyXEnmrhUg98lUG676qTH+Y7dlEX107dLhFuKEYyV8D
ZalrFQO/3WpLWdIAmWrj/wq14qii1rgmy96Nh3EqG3CS50HEMGkW1llRx2rgBvGl
pgoTcwOfT+h8s0HlZdIS/cv2wXqwPgMWr1PIk3as1fu1OH8n/BjeGQQnNJEaoBV7
El2C/hz3oqf2uYQ1QvpU23F1NrstekxukO8o2Y/fqsgMJqAiNJApUCl/dNhK+W57
iicjvPirUQk8MUVEHXKhWIzYxon6aEUTx+xyNMBpRJIZlJ61FxtnZhoPiAFtXVPb
+95BRJA9npidlVFjqz9QDK/4NSnJ3KaERR9tTDcvq4zqT22Z1Ai5gWQKqogTz5Mk
F+nZwVizW0yi33id9qDpAuApp8o6AiyH5Ql1Bo23bvqS2lMrXPIS/QmPPsA76CBs
lYjQwwz8abUD1pPdzyYtMKZUMwhicSFOHFDM4oQN16k2KJuntuih8BKVDCzIOq+E
KHyeh1BqWplUtFh1ckxZlXW9p9F7TsWjtfcKaY8hkX0Cr4uVjwAFIjLcAxk67ROe
huEb3Gt+lwJz6aNnZUU87ukMAxRVR2LL0btdxgc6z8spl66GXro/LUkXmAdyOEMV
UDrmjf9pr7o00hC7lCHFzw==
=WE0r
-----END PGP PUBLIC KEY BLOCK-----
```
## Rationale
### Early Disclosure
The Go security policy does not contain a provision for the early disclosure
of vulnerabilities to a small set of "trusted" partners. The Xen and WebKit
policies do contain provisions for this. According to several members of the
security response team at Google (Ben Laurie, Adam Langley), it is incredibly
difficult to retain secrecy of embargoed issues once they have been shared
with even a small number of partners.
### Security Review Team Membership
The Go security policy does not contain formal provisions for nominating or
removing members of the security review team. WebKit, for example, specifies
how new members may join its security review team. This may be
needed for the Go project at some point in the future; it does not seem
necessary at this time.
## Open issues
* PGP key pair needed for security@golang.org address.
* Need to designate a primary and secondary alternative contact.
# Proposal: Make 64-bit fields be 64-bit aligned on 32-bit systems, add //go:packed, //go:align directives
Author(s): Dan Scales (with input from many others)
Last updated: 2020-06-08
Initial proposal and discussion at: https://github.com/golang/go/issues/36606
## Abstract
We propose to change the default layout of structs on 32-bit systems such that
64-bit fields will be 8-byte (64-bit) aligned. The layout of structs on 64-bit systems
will not change. For compatibility reasons (and finer control of struct layout),
we also propose the addition of a `//go:packed` directive that applies to struct
types. When the `//go:packed` directive is specified immediately before a struct
type, then that struct will have a fully-packed layout, where fields are placed
in order with no padding between them and hence no alignment based on types. The
developer must explicitly add padding to enforce any desired alignment. We also
propose the addition of a `//go:align` directive that applies to all types. It
sets the required alignment in bytes for the associated type. This directive
will be useful in general, but specifically can be used to set the required
alignment for packed structs.
## Background
Currently, each Go type has a required alignment in bytes.
This alignment is used for setting the alignment for any
variable of the associated type (whether global variable, local variable,
argument, return value, or heap allocation) or a field of the associated type in
a struct. The actual alignments are implementation-dependent. In gc, the alignment
can be 1, 2, 4, or 8 bytes. The alignment determination is fairly straightforward,
and mostly encapsulated in the functions `gc.dowidth()` and `gc.widstruct()` in
`cmd/compile/internal/gc/align.go`. gccgo and GoLLVM have slightly different alignments
for some types. This proposal is focused on the alignments used in gc.
The alignment rules differ slightly between 64-bit and 32-bit systems. Of
course, certain types (such as pointers and integers) have different sizes
and hence different alignments. The main other difference is the treatment of
64-bit basic types such as `int64`, `uint64`, and `float64`. On 64-bit systems, the
alignment of 64-bit basic types is 8 bytes (64 bits), while on 32-bit systems,
the alignment of these types is 4 bytes (32 bits). This means that fields in a
struct on a 32-bit system that have a 64-bit basic type may be aligned only to 4
bytes, rather than to 8 bytes.
There are a few more alignment rules for global variables and heap-allocated
locations. Any heap-allocated type that is 8 bytes or more is always aligned on
an 8-byte boundary, even on 32-bit systems (see `runtime.mallocgc`).
Any global variable which is a
64-bit base type or is a struct also always is aligned on 8 bytes (even on
32-bit systems). Hence, the above alignment difference for 64-bit base types
between 64-bit and 32-bit systems really only occurs for fields in a struct and
for stack variables (including arguments and return values).
The main goal of this change is to avoid the bugs that frequently happen on
32-bit systems, where a developer wants to be able to do 64-bit operations (such
as an atomic operation) on a 64-bit field, but gets an alignment error because
the field is not 8-byte aligned. With the current struct layout rules (based on
the current type alignment rules), a developer must often add explicit padding
in order to make sure that such a 64-bit field is on a 8-byte boundary. As shown
by repeated mentions in issue [#599](https://github.com/golang/go/issues/599)
(18 in 2019 alone), developers still often run into this problem. They may only
run into it late in the development cycle as they are testing on 32-bit
architectures, or when they execute an uncommon code path that requires the
alignment.
As an example, the struct for ticks in `runtime/runtime.go` is declared as
```go
var ticks struct {
	lock mutex
	pad  uint32 // ensure 8-byte alignment of val on 386
	val  uint64
}
```
so that the `val` field is properly aligned on 32-bit architectures.
Note that there can also be alignment issues with stack variables which have
64-bit base types, but it seems less likely that a program would be using a local
variable for 64-bit operations such as atomic operations.
There are related reasons why a developer might want to explicitly control the
alignment of a specific type (possibly to an alignment even greater than 8 bytes),
as detailed in issue [#19057](https://github.com/golang/go/issues/19057). As
mentioned in that issue, "on x86 there are vector instructions that require
alignment to 16 bytes, and there are even some instructions (e.g., vmovaps with
VEC.256), that require 32 byte alignment." It is also possible that a developer
might want to force alignment of a type to be on a cache line boundary, to
improve locality and avoid false sharing (e.g. see `cpu.CacheLinePad` in the
`runtime` package sources). Cache line sizes typically range from
32 to 128 bytes.
## Proposal
This proposal consists of a proposed change to the default alignment rules on 32-bit systems,
a new `//go:packed` directive, and a new `//go:align` directive. We describe each in a
separate sub-section.
### Alignment changes
The main part of our proposal is the following:
* We change the default alignment of 64-bit fields in structs on 32-bit systems
from 4 bytes to 8 bytes
* We do not change the alignment of 64-bit base types otherwise (i.e. for stack
variables, global variables, or heap allocations)
Since the alignment of a struct is based on the maximum alignment of any field
in the struct, this change will also change the overall alignment of certain
structs on 32-bit systems from 4 to 8 bytes.
It is important that we do not change the alignment of stack variables
(particular arguments and return values), since changing their alignment would
directly change the Go calling ABI. (As we’ll note below, we are still
changing the ABI in a minor way, since we are changing the layout and possibly
the size of some structs that could be passed as arguments and return values.)
As mentioned above, 64-bit basic types are already aligned to 8 bytes
(based on other rules) for global variables or heap allocations. Therefore, we
do not usually run into alignment problems for 64-bit basic types on 32-bit
systems when they are simple global variables or heap allocations.
One way to think about this change is that each type has two alignment
properties, analogous to the `Type.FieldAlign()` and `Type.Align()`
methods in the `reflect` package. The first property specifies
the alignment when that type
occurs as a field in a struct. The second property specifies the
alignment when that type is used in any other situation, including stack
variables, global variables, and heap allocations. For almost all types, the
field alignment and the "other" alignment of each type will be equal to each
other and the same as it is today. However, in this proposal, 64-bit basic types
(`int64`, `uint64`, and `float64`) on 32-bit system will have a field alignment of 8
bytes, but keep an "other" alignment of 4 bytes. As we mentioned, structs that
contain 64-bit basic types on 32-bit systems may have 8-byte alignment now where
previously they had 4-byte alignment; however, both their field alignment and
their "other" alignment would have this new value.
### Addition of a `//go:packed` directive
We make the above proposed change in order to reduce the kind of bugs detailed in issue
[#599](https://github.com/golang/go/issues/599). However, we need to maintain
explicit compatibility for struct layout in some important situations.
Therefore, we also propose the following:
* We add a new Go directive `//go:packed` which applies to the immediately
following struct type. That struct type will have a fully-packed layout,
where fields are placed in order with no padding between them and hence no
alignment based on types.
The `//go:packed` property will become part of the following struct type
that it applies to. In particular, we will not allow assignment/conversion from
a struct type to the equivalent packed struct type, and vice versa.
The `//go:packed` property only applies to the struct type being defined. It does
not apply to any struct type that is embedded in the type being defined. Any
embedded struct type must similarly be defined with the `//go:packed` property if
it is to be packed either on its own or inside another struct definition.
`//go:packed` will be ignored if it appears anywhere
else besides immediately preceding a struct type definition.
The idea with the `//go:packed` directive is to give the developer complete
control over the layout of a struct. In particular, the developer (or a
code-generating program) can add padding so that the fields of a packed struct are laid
out in exactly the same way as they were in Go 1.15 (i.e. without the above
proposed alignment change). Matching the exact layout as in Go 1.15 is needed
in some specific situations:
1. Matching the layout of some Syscall structs, such as `Stat_t` and `Flock_t` on linux/386.
On 32-bit systems, these two structs actually have 64-bit fields that are
not aligned on 8-byte boundaries. Since these structs are the exact
struct used to interface with Linux syscalls, they must have exactly the
specified layout. With this proposal, any structs passed to syscall.Syscall
should be laid out exactly using `//go:packed`.
2. Some cgo structs (which are used to match declared C structs) may also
have 64-bit fields that are not aligned on 8-byte boundaries. So, with this
proposal, cgo should use `//go:packed` for generated Go structs that must
exactly match the layout of C structs. In fact, there are currently C
structs that cannot be matched exactly by Go structs, because of the current
(Go 1.15) alignment rules. With the use of `//go:packed`, cgo will now be able
to match exactly the layout of any C struct (unless the struct uses bitfields).
Note that there is possibly assembly language code in some Go programs or
libraries that directly accesses the fields of a struct using hard-wired
offsets, rather than offsets obtained from a `go_asm.h` file. If that struct has
64-bit fields, then the offsets of those fields may change on 32-bit systems
with this proposal. If so, the assembly code may break. In that
case, we strongly recommend rewriting the assembly language code to use offsets
from `go_asm.h` (or values obtained from Go code via
`unsafe.Offsetof`). We would not recommend forcing the layout of the struct to
remain the same by using `//go:packed` and appropriate padding.
### Addition of a `//go:align` directive
One issue with the `//go:packed` idea is determining the overall alignment of
a packed struct. Currently, the overall alignment of a struct is computed as
the maximum alignment of any of its fields. In the case of `//go:packed`, the
alignment of each field is essentially 1. Therefore, conceptually, the overall
alignment of a packed struct is 1. We could therefore consider that we need to
explicitly specify the alignment of a packed struct.
As we mentioned above, there are other reasons why developers would like to
specify an explicit alignment for a Go type. For both of these reasons, we
therefore propose a method to specify the alignment of a Go type:
* We add a new Go directive `//go:align` N, which applies to the immediately
following type, where N can be any positive power of 2. The following
type will have the specified required alignment, or the natural alignment
of the type, whichever is larger. It will be a compile-time
error if N is missing or is not a positive power of 2. `//go:packed` and `//go:align`
can appear in either order if they are both specified preceding a struct type.
In order to work well with memory allocators, etc., we only allow alignments
that are powers of 2. There will probably have to be some practical upper limit on
the possible value of N. Even for the purposes of aligning to cache lines, we would
likely only need alignment up to 128 bytes.
One issue with allowing otherwise identical types to have different alignments
is the question of when pointers to these types can be converted. Consider the
following example:
```go
type A struct {
	x, y, z, w int32
}

type B struct {
	x, y, z, w int32
}

//go:align 8
type C struct {
	x, y, z, w int32
}

a := &A{}
b := (*B)(a) // conversion 1
c := (*C)(a) // conversion 2
```
As in current Go, conversion 1 should certainly be allowed. However, it is not
clear that conversion 2 should be allowed, since object `a` may not be
aligned to 8 bytes, so it may not satisfy the alignment property of `C` if `a`
is assigned to `c`. Although this issue of convertibility applies only to pointers of aligned
structs, it seems simplest and most consistent to include alignment as part of the base
type that it applies to. We would therefore disallow converting from `A` to `C` and vice versa.
We propose the following:
* An alignment directive becomes part of the type that it applies to, and makes that type
distinct from an otherwise identical type with a different (or no) alignment.
With this proposal, types `(*C)` and `(*A)` are not convertible, despite pointing to
structs that look identical, because the alignment of the structs to which they
point are different. Therefore, conversion 2 would cause a compile-time error. Similarly,
conversion between `A` and `C` would be disallowed.
### Vet and compiler checks
Finally, we would like to help ensure that `//go:packed` is used in
cases where struct layout must maintain strict compatibility with Go 1.15. As
mentioned above, the important cases where compatibility must be maintained are structs
passed to syscalls and structs used with
cgo. Therefore, we propose the addition of the following vet check:
* New 'go vet' pass that requires the usage of `//go:packed` on a struct if a
pointer to that struct type is passed to syscall.Syscall (or its variants) or to cgo.
The syscall check should cover most of the supported OSes (including Windows and
Windows DLLs), but we may have to extend the vet check if there are other ways
to call native OS functions. For example, in Windows, we may also want to cover
the `(*LazyProc).Call` API for calling DLL functions.
We could similarly have an error if a pointer to a non-packed struct type is
passed to an assembly language function, though that warning might have a lot of
false positives. Possibly we would limit warnings to such assembly language
functions that clearly do not make use of `go_asm.h` definitions.
We intend that `//go:packed` should only be used in limited situations, such as
controlling exact layout of structs used in syscalls or in cgo. It is possible
to cause bugs or performance problems if it is not used correctly. In
particular, there could be problems with garbage collection if fields containing
pointers are not aligned to the standard pointer alignment. Therefore,
we propose the following compiler and vet checks:
* The compiler will give a compile-time error if the fields of a packed struct are aligned
incorrectly for garbage collection or hardware-specific needs. In particular, it will be
an error if a pointer field is not aligned to a 4-byte boundary. It may also be an error,
depending on the hardware, if 16-bit fields are not aligned to 2-byte boundaries or
32-bit fields are not aligned to 4-byte boundaries.
Some processors can successfully load 32-bit quantities that are not aligned to
4 bytes, but the unaligned load is much slower than an aligned load. So, the idea
of the compiler check for the alignment of 16-bit and 32-bit quantities is to protect
against this case where certain loads are "silently" much slower because they
are accessing unaligned fields.
## Alternate proposals
The above proposal contains a coherent set of proposed changes that address the
main issue [#599](https://github.com/golang/go/issues/599),
while also including some functionality (packed structs, aligned
types) that are useful for other purposes as well.
However, there are a number of alternatives, both to the set of features in the
proposal, and also in the details of individual items in the proposal.
The above proposal has quite a number of
individual items, each of which adds complexity and may have unforeseen issues.
One alternative is to reduce the scope of the proposal, by removing
`//go:align`. With this alternative, we would propose a separate rule for the
alignment of a packed struct. Instead of having a default alignment of 1, a
packed struct would have as its alignment the max alignment of all the types
that make up its individual fields. That is, a packed struct would automatically
have the same overall alignment as the equivalent unpacked struct. With this
definition, we don't need to include `//go:align` in this proposal.
Another alternative (which actually increases the scope of the proposal) would
be to allow `//go:align` to apply not just to type declarations, but also to
field declarations within a packed struct. This would allow explicit alignment of
fields within a packed struct, which would make it easier for developers to get field
alignment correct without using padding. However, we probably do not want to
encourage broad use of `//go:align`, and this ability of `//go:align` to set the
alignment of fields might become greatly overused.
## Alternate syntax
There is also an alternative design that has the same set of features, but expresses the
alignment of types differently. Instead of using `//go:align`, this alternative
follows the proposal in issue [#19057](https://github.com/golang/go/issues/19057) and
expresses alignment via new runtime types
included in structs. In this proposal, there are runtime types
`runtime.Aligned8`, `runtime.Aligned16`, etc. If, for example, a field with
type `runtime.Aligned16` is included in a struct type definition, then that
struct type will have an alignment of 16 bytes, as in:
```go
type vector struct {
	_    runtime.Aligned16
	vals [16]byte
}
```
It is possible that using `runtime.AlignedN` could directly apply to the following field in
the struct as well. Hence, `runtime.AlignedN` could appear multiple times in a struct in order
to set the alignment of various fields, as well as affecting the overall alignment of the struct.
Similarly, the packed attribute of a struct is expressed by including a field with type
`runtime.Packed`. These fields are zero-length and can either have a name or not. If they
have a name, it is possible to take a pointer to them. It would be an error to use these types
in any situation other than as a type for a field in a struct.
There are a number of positives and negatives to this proposal, as compared to the use
of `//go:packed` and `//go:align`, as listed below.
Advantages of special field types / disadvantages of directives:
1. Most importantly, using `runtime.AlignedN` and `runtime.Packed` types in a struct makes it
obvious that these constructs affect the type of the containing struct. The inclusion of these
extra fields means that Go doesn't require any special added constraints for type
equivalence, assignability, or convertibility. The directives `//go:packed` and
`//go:align` do not make it as obvious that they actually change the following type. This
may cause more changes in other Go tools, since they must be changed to notice these
directives and realize their effect on the following type. There is no current `//go:` directive
that affects the following type. (`//go:notinheap` relates to the following type, but does not
change the declared type, and is only available in the runtime package.)
2. `runtime.AlignedN` could just be a zero-width type with alignment N that also affects the
alignment of the following field. This is easy to describe and understand, and provides a
natural way to control both field and struct alignment. [`runtime.AlignedN` may or may not
disable field re-ordering -- to be determined.]
3. If `runtime.AlignedN` applies to the following field, users can easily control padding and
alignment within a struct. This is particularly useful in conjunction with `runtime.Packed`, as it
provides a mechanism to add back desired field alignment where packing removed it. It
potentially seems much more unusual to have a directive `//go:align` be specified inside a
struct declaration and applying specifically to the next field.
4. Some folks prefer to not add any more pragma-like `//go:` comments in the language.
Advantages of directives / disadvantages of special field types:
1. We have established `//go:` as the prefix for these kinds of build/compiler directives, and it's
unfortunate to add a second one in the type system instead.
2. With `runtime.AlignedN`, a simple non-struct type (such as `[16]byte`) can only be aligned by
embedding it in a struct, whereas `//go:align` can apply directly to a non-struct type. It doesn't
seem very Go-like to force people to create structs like the 'vector' type when plain types like
`[16]byte` will do.
3. `runtime.Packed` and `runtime.AlignedN` both appear to apply to the following field in the
struct. In the case of `runtime.Packed`, this doesn’t make any sense -- `runtime.Packed`
applies to the whole struct only, not to any particular field.
4. Adding an alignment/packing field forces the use of key:value literals, which is annoying
and non-orthogonal. Directives have no effect on literals, so unkeyed literals would continue
to work.
5. With `runtime.Packed`, there is a hard break at some Go version, where you can't write a
single struct that works for both older and newer Go versions. That will necessitate separate
files and build tags during any conversion. With the comments you can write one piece of
code that has the same meaning to both older and newer versions of Go (because the explicit
padding is old-version compatible and the old version ignores the `//go:packed` comment).
## Compatibility
We are not changing the alignment of arguments, return variables, or local
variables. Since we would be changing the default layout of structs, we could
affect some programs running on 32-bit systems that depend on the layout of
structs. However, the layout of structs is not explicitly defined in the Go
language spec, except for the minimum alignment, and we are maintaining the
previous minimum alignments. So, we don't believe this change breaks the Go 1
compatibility promise. If assembly code is accessing struct fields, it should be
using the symbolic constants (giving the offset of each field in a struct) that
are available in `go_asm.h`. `go_asm.h` is automatically generated and available for
each package that contains an assembler file, or can be explicitly generated for
use elsewhere via `go tool compile -asmhdr go_asm.h`.
## Implementation
We have developed some prototype code that changes the default alignment of
64-bit fields on 32-bit systems from 4 bytes to 8 bytes. Since it does not
include an implementation of `//go:packed`, it does not yet try to deal with the
compatibility issues associated with syscall structs `Stat_t` and `Flock_t` or
cgo-generated structs in a complete way. The change is
[CL 210637](https://go-review.googlesource.com/c/go/+/210637). Comments on the design
or implementation are very welcome.
## Open Issues
# Go 1.3 Linker Overhaul
Russ Cox \
November 2013 \
golang.org/s/go13linker
## Abstract
The linker is one of the slowest parts of building and running a typical Go program. To address this, we plan to split the linker into two pieces. Perhaps one can be written in Go.
[**Update, 2023.** This plan was originally published as a Google document. For easier access, it was converted to Markdown in this repository in 2023. Later work overhauled the linker a second time, greatly improving its structure, efficiency, and code quality. This document has only minor historical value now.]
## Background
The linker has always been the slowest part of the Plan 9 toolchain, and it is now the slowest part of the Go toolchain. Ken Thompson’s [overview of the toolchain](http://plan9.bell-labs.com/sys/doc/compiler.html) concludes:
> The new compilers compile quickly, load slowly, and produce medium quality object code. The compilers are relatively portable, requiring but a couple of weeks’ work to produce a compiler for a different computer. For Plan 9, where we needed several compilers with specialized features and our own object formats, this project was indispensable. It is also necessary for us to be able to freely distribute our compilers with the Plan 9 distribution.
>
> Two problems have come up in retrospect. The first has to do with the division of labor between compiler and loader. Plan 9 runs on multi-processors and as such compilations are often done in parallel. Unfortunately, all compilations must be complete before loading can begin. The load is then single-threaded. With this model, any shift of work from compile to load results in a significant increase in real time. The same is true of libraries that are compiled infrequently and loaded often. In the future, we may try to put some of the loader work back into the compiler.
That document was written in the early 1990s. The future is here.
## Proposed Plan
The current linker performs two separable tasks. First, it translates an input stream of pseudo-instructions into executable code and data blocks, along with a list of relocations. Second, it deletes dead code, merges what’s left into a single image, resolves relocations, and generates a few whole-program data structures such as the [runtime symbol table](http://golang.org/s/go12symtab).
The first part can be factored out into a library - liblink - that can be linked into the assemblers and compilers. The object files written by 6a, 6c, or 6g and so on would be written by liblink and then contain executable code and data blocks and relocations, the result of the first half of the current linker.
The second part can be handled by what’s left of the linker after extracting liblink. That remaining program would read the new object files and complete the link. That linker is a small amount of code, the bulk of it architecture-independent. It is possible that it could be merged into a single architecture-independent program invoked as “go tool ld”. It is even possible that it could be rewritten in Go, making it easy to parallelize large links. (See the section below for how to bootstrap.)
To start, we will focus on getting the new split working with C code. The exploration of using Go will happen only once the rest of the change is done.
To avoid churn in the usage of the tools, the generated object files will keep the existing suffixes .5, .6, .8. Perhaps in Go 1.3 we will even include shim programs named 5l, 6l, and 8l that invoke the new linker. These shim programs would be retired in Go 1.4.
## Object Files
The new split requires a new object file format. The current objects contain pseudo-instruction streams, but the new objects will contain executable code and data blocks along with relocations.
A natural question is whether we should adopt an existing object file format, such as ELF. At first, we will use a custom format. A Go-specific linker is required to build runtime data structures like the symbol table, so even if we used ELF object files we could not reuse a standard ELF linker. ELF files are also considerably more general and ELF semantics considerably more complex than the Go-specific linker needs. A custom, less general object file format should be simpler to generate and simpler to consume. On the other hand, ELF can be processed by standard tools like readelf, objdump, and so on. Once the dust has settled, though, and we know exactly what we need from the format, it is worth looking at whether the use of ELF makes sense.
The details of the new object file are not yet worked out. The rest of this section lists some design considerations.
- Obviously the files should be as simple as possible. With few exceptions, anything that can be done in the library half of the linker should be. Possible surprises include the stack split code being done in the library half, which makes object files OS-specific (although they already are, due to OS-specific Go code in packages), and the software floating point work being done in the library half, making ARM object files GOARM-specific (today nothing GOARM-specific is done until the linker runs).
- We should make sure that object files are usable via mmap. This would reduce copying during I/O. It may require changing the Go runtime to simply panic, not crash, on SIGSEGV on non-nil addresses.
- Pure Go packages consist of a single object file generated by invoking the Go compiler once on the complete set of Go source files. That object file is then wrapped in an archive. We should arrange that a single object file is also a valid archive file, so that in that common case there is no wrapping step needed.
## Bootstrapping
If the new Go linker is written in Go, there is a bootstrapping problem: how do you link the linker? There are two approaches.
The first approach is to maintain a bootstrap list of CLs. The first CL in the sequence would have the current linker, written in C. Each subsequent step would be a CL containing a new linker that can be linked using the previous linker. The final binaries resulting from the sequence can be made available for download. The sequence need not be too long and could be made to coincide with milestones. For example, we could arrange that the Go 1.3 linker can be compiled as a Go 1.2 program, the Go 1.4 linker can be compiled as a Go 1.3 program, and so on. The recorded sequence makes it possible to re-bootstrap if needed but also provides a way to defend against the [Trusting Trust problem](http://cm.bell-labs.com/who/ken/trust.html). Another way to bootstrap would be to compile gccgo and use it to build the Go 1.3 linker.
The second approach is to keep the C linker even after we have a better one written in Go, and to keep both mostly feature-equivalent. The version written in C only needs to keep enough features to link the one written in Go. It needs to pick up some object files, merge them, and write out an executable. There’s no need for cgo support, no need for external linking, no need for shared libraries, no need for performance. It should be a relatively modest amount of code (perhaps just a few thousand lines) and should not need to change very often. The C version would be built and used during make.bash but not installed. This approach is easier for other developers building Go from source.
It doesn’t matter much which approach we take, just that there is at least one viable approach. We can decide once things are further along.
# Error Values — Problem Overview
Russ Cox\
August 27, 2018
## Introduction
This overview and the accompanying
detailed draft designs
are part of a collection of [Go 2 draft design documents](go2draft.md).
The overall goal of the Go 2 effort is to address
the most significant ways that Go fails to scale
to large code bases and large developer efforts.
One way that Go programs fail to scale well is in the capability of typical errors.
A variety of popular helper packages add functionality
beyond the standard error interface,
but they do so in incompatible ways.
As part of Go 2, we are considering whether to standardize any
"optional interfaces" for errors,
to allow helper packages to interoperate
and ideally to reduce the need for them.
As part of Go 2, we are also considering,
as a separate concern,
more convenient [syntax for error checks and handling](go2draft-error-handling-overview.md).
## Problem
Large programs must be able to test for
and react to errors programmatically and also report them well.
Because an error value is any value implementing the [`error` interface](https://golang.org/ref/spec#Errors),
there are four ways that Go programs conventionally test for specific errors.
First, programs can test for equality with sentinel errors like `io.EOF`.
Second, programs can check for an error implementation type using a [type assertion](https://golang.org/ref/spec#Type_assertions) or [type switch](https://golang.org/ref/spec#Type_switches).
Third, ad-hoc checks like
[`os.IsNotExist`](https://golang.org/pkg/os/#IsNotExist)
check for a specific kind of error,
doing limited unwrapping.
Fourth, because none of these approaches works in general when the error has been wrapped in additional context,
programs often do substring searches in the error text reported by `err.Error()`.
Obviously this last approach is the least desirable,
and it would be better to support the first three checks even in the presence of arbitrary wrapping.
The most common kind of wrapping is use of fmt.Errorf, as in
```Go
if err != nil {
	return fmt.Errorf("write users database: %v", err)
}
```
Wrapping in an error type is more work but more useful for programmatic tests, like in:
```Go
if err != nil {
	return &WriteError{Database: "users", Err: err}
}
```
Either way, if the original `err` is a known sentinel or known error implementation type,
then wrapping, whether by `fmt.Errorf` or a new type like `WriteError`,
breaks both equality checks and type assertions looking for the original error.
This discourages wrapping, leading to less useful errors.
In a complex program, the most useful description of an error
would include information about all the different operations leading to the error.
For example, suppose that the error writing to the database
earlier was due to an RPC call.
Its implementation called `net.Dial` of `"myserver"`,
which in turn read `/etc/resolv.conf`, which maybe today was accidentally unreadable.
The resulting error’s `Error` method might return this string (split onto two lines for this document):
```
write users database: call myserver.Method:
    dial myserver:3333: open /etc/resolv.conf: permission denied
```
The implementation of this error is five different levels (four wrappings):
1. A `WriteError`, which provides `"write users database: "` and wraps
2. an `RPCError`, which provides `"call myserver.Method: "` and wraps
3. a `net.OpError`, which provides `"dial myserver:3333: "` and wraps
4. an `os.PathError`, which provides `"open /etc/resolv.conf: "` and wraps
5. `syscall.EPERM`, which provides `"permission denied"`
There are many questions you might want to ask programmatically of err,
including:
1. Is it an `RPCError`?
2. Is it a `net.OpError`?
3. Does it satisfy the `net.Error` interface?
4. Is it an `os.PathError`?
5. Is it a permission error?
The first problem is that it is too hard to ask these kinds of questions.
The functions [`os.IsExist`](https://golang.org/pkg/os/#IsExist),
[`os.IsNotExist`](https://golang.org/pkg/os/#IsNotExist),
[`os.IsPermission`](https://golang.org/pkg/os/#IsPermission),
and
[`os.IsTimeout`](https://golang.org/pkg/os/#IsTimeout)
are symptomatic of the problem.
They lack generality in two different ways:
first, each function tests for only one specific kind of error,
and second, each understands only a very limited number of wrapping types.
In particular, these functions understand a few wrapping errors,
notably [`os.PathError`](https://golang.org/pkg/os/#PathError), but
not custom implementations like our hypothetical `WriteError`.
The second problem is less critical but still important:
the reporting of deeply nested errors is too difficult to read
and leaves no room for additional detail,
like relevant file positions in the program.
Popular helper packages exist to address these problems,
but they disagree on the solutions and in general do not interoperate.
## Goals
There are two goals, corresponding to the two main problems.
First, we want to make error inspection by programs easier
and less error-prone, to improve the error handling and
robustness of real programs.
Second, we want to make it possible to print errors
with additional detail, in a standard form.
Any solutions must keep existing code working
and fit with existing source trees.
In particular, the concepts of comparing for equality with error sentinels
like `io.ErrUnexpectedEOF` and testing for errors of a particular type must be preserved.
Existing error sentinels must continue to be supported,
and existing code will not change to return different error types.
That said, it would be okay to expand functions like
[`os.IsPermission`](https://golang.org/pkg/os/#IsPermission) to understand arbitrary wrappings instead of a fixed set.
When considering solutions for printing additional error detail,
we prefer solutions that make it possible—or at least avoid making it impossible—to
localize and translate errors using
[golang.org/x/text/message](https://godoc.org/golang.org/x/text/message).
Packages must continue to be able to define their own error types easily.
It would be unacceptable to define a new, generalized "one true error implementation"
and require all code to use that implementation.
It would be equally unacceptable to add so many additional requirements
on error implementations that only a few packages would bother.
Errors must also remain efficient to create.
Errors are not exceptional.
It is common for errors to be generated, handled, and discarded,
over and over again, as a program executes.
As a cautionary tale, years ago at Google a program written
in an exception-based language was found to be spending
all its time generating exceptions.
It turned out that a function on a deeply-nested stack was
attempting to open each of a fixed list of file paths,
to find a configuration file.
Each failed open operation threw an exception;
the generation of that exception spent a lot of time recording
the very deep execution stack;
and then the caller discarded all that work and continued around its loop.
The generation of an error in Go code must remain a fixed cost,
regardless of stack depth or other context.
(In a panic, deferred handlers run _before_ stack unwinding
for the same reason: so that handlers that do care about
the stack context can inspect the live stack,
without an expensive snapshot operation.)
## Draft Design
The two main problems—error inspection and error formatting—are
addressed by different draft designs.
The constraints of keeping interoperation with existing code
and allowing packages to continue to define their own error types
point strongly in the direction of defining
optional interfaces that an error implementation can satisfy.
Each of the two draft designs adds one such interface.
### Error inspection
For error inspection, the draft design follows the lead of existing packages
like [github.com/pkg/errors](https://github.com/pkg/errors)
and defines an optional interface for an error to return the next error
in the chain of error wrappings:
```Go
package errors

type Wrapper interface {
	Unwrap() error
}
```
For example, our hypothetical `WriteError` above would need to implement:
```Go
func (e *WriteError) Unwrap() error { return e.Err }
```
Using this method, the draft design adds two new functions to package errors:
```Go
// Is reports whether err or any of the errors in its chain is equal to target.
func Is(err, target error) bool

// As checks whether err or any of the errors in its chain is a value of type E.
// If so, it returns the discovered value of type E, with ok set to true.
// If not, it returns the zero value of type E, with ok set to false.
func As(type E)(err error) (e E, ok bool)
```
Note that the second function has a type parameter, using the
[contracts draft design](go2draft-generics-overview.md).
Both functions would be implemented as a loop first testing
`err`, then `err.Unwrap()`, and so on, to the end of the chain.
Existing checks would be rewritten as needed to be "wrapping-aware":
```Go
errors.Is(err, io.ErrUnexpectedEOF)     // was err == io.ErrUnexpectedEOF
pe, ok := errors.As(*os.PathError)(err) // was pe, ok := err.(*os.PathError)
```
For details, see the [error inspection draft design](go2draft-error-inspection.md).
### Error formatting
For error formatting, the draft design defines an optional interface implemented by errors:
```Go
package errors

type Formatter interface {
	Format(p Printer) (next error)
}
```
The argument to `Format` is a `Printer`, provided by the package formatting the error
(usually [`fmt`](https://golang.org/pkg/fmt), but possibly a localization package like
[`golang.org/x/text/message`](https://godoc.org/golang.org/x/text/message) instead).
The `Printer` provides methods `Print` and `Printf`, which emit output,
and `Detail`, which reports whether extra detail should be printed.
The `fmt` package would be adjusted to format errors printed using `%+v` in a multiline format,
with additional detail.
For example, our database `WriteError` might implement the new `Format` method and the old `Error` method as:
```Go
func (e *WriteError) Format(p errors.Printer) (next error) {
	p.Printf("write %s database", e.Database)
	if p.Detail() {
		p.Printf("more detail here")
	}
	return e.Err
}

func (e *WriteError) Error() string { return fmt.Sprint(e) }
```
And then printing the original database error using `%+v` would look like:
```
write users database:
    more detail here
--- call myserver.Method:
--- dial myserver:3333:
--- open /etc/resolv.conf:
--- permission denied
```
The errors package might also provide a convenient implementation
for recording the line number of the code creating the error and printing it back when `p.Detail` returns true.
If all the wrappings involved included that line number information, the `%+v` output would look like:
```
write users database:
    more detail here
    /path/to/database.go:111
--- call myserver.Method:
    /path/to/grpc.go:222
--- dial myserver:3333:
    /path/to/net/dial.go:333
--- open /etc/resolv.conf:
    /path/to/os/open.go:444
--- permission denied
```
For details, see the [error printing draft design](go2draft-error-printing.md).
## Discussion and Open Questions
These draft designs are meant only as a starting point for community discussion.
We fully expect the details to be revised based on feedback and especially experience reports.
This section outlines some of the questions that remain to be answered.
**fmt.Errorf**.
If `fmt.Errorf` is invoked with a format ending in `": %v"` or `": %s"`
and with a final argument implementing the error interface,
then `fmt.Errorf` could return a special implementation
that implements both `Wrapper` and `Formatter`.
Should it? We think definitely yes to `Formatter`.
Perhaps also yes to `Wrapper`, or perhaps we should
introduce `fmt.WrapErrorf`.
Adapting `fmt.Errorf` would make nearly all existing code
using `fmt.Errorf` play nicely with `errors.Is`, `errors.As`,
and multiline error formatting.
Not adapting `fmt.Errorf` would instead require adding
some other API that did play nicely,
for use when only textual context needs to be added.
**Source lines**.
Many error implementations will want to record source lines
to be printed as part of error detail.
We should probably provide some kind of embedding helper
in [package `errors`](https://golang.org/pkg/errors)
and then also use that helper in `fmt.Errorf`.
Another question is whether `fmt.Errorf` should by default
record the file and line number of its caller, for display in the
detailed error format.
Microbenchmarks suggest that recording the caller’s file and line
number for printing in detailed displays would roughly double the
cost of `fmt.Errorf`, from about 250ns to about 500ns.
**Is versus Last**.
Instead of defining `errors.Is`,
we could define a function `errors.Last`
that returns the final error in the chain.
Then code would write `errors.Last(err) == io.ErrUnexpectedEOF`
instead of `errors.Is(err, io.ErrUnexpectedEOF)`.
The draft design avoids this approach for a few reasons.
First, `errors.Is` seems a clearer statement of intent.
Second, the higher-level `errors.Is` leaves room for future adaptation,
instead of being locked into the single equality check.
Even today, the draft design’s implementation tests for equality with each error in the chain,
which would allow testing for a sentinel value
that was itself a wrapper (presumably of another sentinel)
as opposed to only testing the end of the error chain.
A possible future expansion might be to allow individual error implementations
to define their own optional `Is(error) bool` methods
and have `errors.Is` prefer that method over the default equality check.
In contrast, using the lower-level idiom
`errors.Last(err) == io.ErrUnexpectedEOF`
eliminates all these possibilities.
Providing `errors.Last(err)` might also encourage type checks
against the result, instead of using `errors.As`.
Those type checks would of course not test against
any of the wrapper types, producing a different result and
introducing confusion.
**Unwrap**. It is unclear if `Unwrap` is the right name for the method
returning the next error in the error chain.
Dave Cheney’s [`github.com/pkg/errors`](https://github.com/pkg/errors) has popularized `Cause` for the method name,
but it also uses `Cause` for the function that returns the last error in the chain.
At least a few people we talked to
did not at first understand the subtle semantic difference between method and function.
An early draft of our design used `Next`, but all our explanations
referred to wrapping and unwrapping,
so we changed the method name to match.
**Feedback**. The most useful general feedback would be
examples of interesting uses that are enabled or disallowed
by the draft design.
We’d also welcome feedback about the points above,
especially based on experience
with complex or buggy error inspection or printing in real programs.
We are collecting links to feedback at
[golang.org/wiki/Go2ErrorValuesFeedback](https://golang.org/wiki/Go2ErrorValuesFeedback).
## Other Go Designs
### Prehistoric Go
The original representation of an error in Go was `*os.Error`, a pointer to this struct:
```Go
// Error is a structure wrapping a string describing an error.
// Errors are singleton structures, created by NewError, so their addresses can
// be compared to test for equality. A nil Error pointer means ``no error''.
// Use the String() method to get the contents; it handles the nil case.
// The Error type is intended for use by any package that wishes to define
// error strings.
type Error struct {
	s string
}

func NewError(s string) *Error
```
In April 2009, we changed `os.Error` to be an interface:
```Go
// An Error can represent any printable error condition.
type Error interface {
	String() string
}
```
This was the definition of errors in the initial public release,
and programmers learned to use equality tests and type checks to inspect them.
In November 2011, as part of the lead-up to Go 1,
and in response to feedback from Roger Peppe and others in the Go community,
we lifted the interface out of the standard library and into [the language itself](https://golang.org/ref/spec#Errors),
producing the now-ubiquitous error interface:
```Go
type error interface {
	Error() string
}
```
The names changed but the basic operations remained the same: equality tests and type checks.
### github.com/spacemonkeygo/errors
[github.com/spacemonkeygo/errors](https://godoc.org/github.com/spacemonkeygo/errors) (July 2013)
was written to support
[porting a large Python codebase to Go](https://medium.com/space-monkey-engineering/go-space-monkey-5f43744bffaa).
It provides error class hierarchies, automatic logging and stack traces, and arbitrary associated key-value pairs.
For error inspection,
it can test whether an error belongs to a particular class, optionally considering wrapped errors,
and considering entire hierarchies.
The `Error` type’s `Error` method returns a string giving the error class name,
message, stack trace if present, and other data.
There is also a `Message` method that returns just the message,
but there is no support for custom formatting.
### github.com/juju/errgo
[github.com/juju/errgo](https://github.com/juju/errgo) (February 2014)
was written to support Juju, a large Go program developed at Canonical.
When you wrap an error, you can choose whether to
adopt the cause of an underlying error or hide it.
Either way the underlying error is available to printing routines.
The package’s `Cause` helper function returns an error intended for the program to act upon,
but it only unwraps one layer,
in contrast to
[`github.com/pkg/errors`](https://godoc.org/github.com/pkg/errors)'s `Cause`
function, which returns the final error in the chain.
The custom error implementation’s `Error` method concatenates the messages of the errors
along the wrapping chain.
There is also a `Details` function that returns a JSON-like string
with both messages and location information.
### gopkg.in/errgo.v1 and gopkg.in/errgo.v2
[`gopkg.in/errgo.v1`](https://godoc.org/gopkg.in/errgo.v1) (July 2014)
is a slight variation of `github.com/juju/errgo`.
[`gopkg.in/errgo.v2`](https://godoc.org/gopkg.in/errgo.v2)
has the same concepts but a simpler API.
### github.com/hashicorp/errwrap
[`github.com/hashicorp/errwrap`](https://godoc.org/github.com/hashicorp/errwrap) (October 2014)
allows wrapping more than one error, resulting in a general tree of errors.
It has a general `Walk` method that invokes a function on every error in the tree,
as well as convenience functions for matching by type and message string.
It provides no special support for displaying error details.
### github.com/pkg/errors
[`github.com/pkg/errors`](https://godoc.org/github.com/pkg/errors) (December 2015)
provides error wrapping and stack trace capture.
It introduced `%+v` to format errors with additional detail.
The package assumes that only the last error of the chain is of interest,
so it provides a helper `errors.Cause` to retrieve that last error.
It does not provide any functions that consider the entire chain when looking for a match.
### upspin.io/errors
[`upspin.io/errors`](https://godoc.org/upspin.io/errors)
is an error package customized for [Upspin](https://upspin.io),
documented in Rob Pike and Andrew Gerrand’s December 2017 blog post
“[Error handling in Upspin](https://commandcenter.blogspot.com/2017/12/error-handling-in-upspin.html).”
This package is a good reminder of the impact that a custom errors package
can have on a project, and that it must remain easy to implement bespoke
error implementations.
It introduced the idea of `errors.Is`, although the one in the draft design
differs in detail from Upspin’s.
We considered for a while whether it was possible to adopt
something like Upspin’s `errors.Match`, perhaps even to generalize
both the draft design’s `errors.Is` and `errors.As` into a single
primitive `errors.Match`.
In the end we could not.
## Designs in Other Languages
Most languages do not allow entirely user-defined error implementations.
Instead they define an exception base class that users extend;
the base class provides a common place to hang functionality
and would enable answering these questions in quite a different way.
But of course Go has no inheritance or base classes.
Rust is similar to Go in that it defines an error as anything
implementing a particular interface.
In Rust, that interface is three methods:
`Display` `fmt`, to print the display form of the error;
`Debug` `fmt`, to print the debug form of the error,
typically a dump of the data structure itself;
and `cause`, which returns the “lower-level cause of this error,”
analogous to `Unwrap`.
Rust does not appear to provide analogues to the draft design’s
`errors.Is` or `errors.As`, or any other helpers that walk the
error cause chain.
Of course, in Rust, the `cause` method is required, not optional.
# Go 1.2 Field Selectors and Nil Checks
Author: Russ Cox
Last updated: July 2013
Discussion at https://go.dev/issue/4238.
Originally at https://go.dev/s/go12nil.
Implemented in Go 1.2 release.
## Abstract
For Go 1.2, we need to define that, if `x` is a pointer to a struct
type and `x == nil`, `&x.Field` causes a runtime panic rather than
silently producing an unusable pointer.
## Background
Today, if you have:
```Go
package main
type T struct {
Field1 int32
Field2 int32
}
type T2 struct {
X [1<<24]byte
Field int32
}
func main() {
var x *T
p1 := &x.Field1
p2 := &x.Field2
var x2 *T2
p3 := &x2.Field
}
```
then:
* `p1 == nil`; dereferencing it causes a panic
* `p2 != nil` (it has pointer value 4); but dereferencing it still
causes a panic
* p3 is not computed: `&x2.Field` panics to avoid producing a pointer
that might point into mapped memory.
The spec does not define what should happen when `&x.Field` is evaluated
for `x == nil`.
The answer probably should not depend on `Field`’s offset within the
struct.
The current behavior is at best merely historical accident; it was
definitely not thought through or discussed.
Those three behaviors are three possible definitions.
The behavior for `p2` is clearly undesirable, since it creates
unusable pointers that cannot be detected as unusable.
That leaves `p1` (`&x.Field` is `nil` if `x` is `nil`) and `p3`
(`&x.Field` panics if `x` is `nil`).
An analogous form of the question concerns `&x[i]` where `x` is a
`nil` pointer to an array.
The current behaviors match those of the struct exactly, depending in
the same way on both the offset of the field and the overall size of
the array.
A related question is how `&*x` should evaluate when `x` is `nil`.
In C, `&*x == x` even when `x` is `nil`.
The spec again is silent.
The gc compilers go out of their way to implement the C rule (it
seemed like a good idea at the time).
A simplified version of a recent example is:
```Go
type T struct {
f int64
sync.Mutex
}
var x *T
x.Lock()
```
The method call turns into `(&x.Mutex).Lock()`, which today is passed
a receiver with pointer value `8` and panics inside the method,
accessing a `sync.Mutex` field.
## Proposed Definition
If `x` is a `nil` pointer to a struct, then evaluating `&x.Field`
always panics.
If `x` is a `nil` pointer to an array, then evaluating `&x[i]`
or `x[i:j]` panics.
If `x` is a `nil` pointer, then evaluating `&*x` panics.
In general, the result of an evaluation of `&expr` either panics or
returns a non-nil pointer.
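The proposed behavior can be demonstrated with a small program (the `addrField2` helper is invented here; it uses `recover` to observe the panic):

```Go
package main

import "fmt"

type T struct {
	Field1 int32
	Field2 int32
}

// addrField2 evaluates &x.Field2. Under the proposed rules this panics
// when x is nil, rather than producing a non-nil but unusable pointer.
func addrField2(x *T) (p *int32, panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	p = &x.Field2
	return
}

func main() {
	var x *T
	_, panicked := addrField2(x)
	fmt.Println(panicked) // true: &x.Field2 panics for nil x
}
```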
## Rationale
The alternative, defining `&x.Field == nil` when `x` is `nil`, delays
the error check.
That feels more like something that belongs in a dynamically typed
language like Python or JavaScript than in Go.
Put another way, it pushes the panic farther away from the problem.
We have not seen a compelling use case for allowing `&x.Field == nil`.
Panicking during `&x.Field` is no more expensive (perhaps less) than
defining `&x.Field == nil`.
It is difficult to justify allowing `&*x` but not `&x.Field`.
They are different expressions of the same computation.
The guarantee that `&expr`—when it evaluates successfully—is always a
non-nil pointer makes intuitive sense and avoids a surprise: how can
you take the address of something and get `nil`?
## Implementation
The addressable expressions are: “a variable, pointer indirection, or
slice indexing operation; or a field selector of an addressable struct
operand; or an array indexing operation of an addressable array.”
The address of a variable can never be `nil`; the address of a slice
indexing operation is already checked because a `nil` slice will have
`0` length, so any index is invalid.
That leaves pointer indirections, field selector of struct, and index
of array, confirming at least that we’re considering the complete set
of cases.
Assuming `x` is in register AX, the current x86 implementation of case
`p3` is to read from the memory `x` points at:
```
TEST 0(AX), AX
```
That causes a fault when `x` is nil.
Unfortunately, it also causes a read from the memory location `x`,
even if the actual field being addressed is later in memory.
This can cause unnecessary cache conflicts if different goroutines own
different sections of a large array and one is writing to the first
entry.
(It is tempting to use a conditional move instruction:
```
TEST AX, AX
CMOVZ 0, AX
```
Unfortunately, the definition of the conditional move is that the load
is unconditional and only the assignment is conditional, so the fault
at address `0` would happen always.)
An alternate implementation would be to test `x` itself and use a
conditional jump:
```
TEST AX, AX
JNZ ok (branch hint: likely)
MOV $0, 0
ok:
```
This is more code (something like 7 bytes instead of 3) but may run
more efficiently, as it avoids spurious memory references and will be
predicted easily.
(Note that defining `&x.Field == nil` would require at least that much
code, if not a little more, except when the offset is `0`.)
It will probably be important to have a basic flow analysis for
variables, so that the compiler can avoid re-testing the same pointer
over and over in a given function.
I started on that general topic a year ago and got a prototype working
but then put it aside (the goal then was index bounds check
elimination).
It could be adapted easily for nil check elimination.
# Proposal: Go 2 Number Literal Changes
Russ Cox\
Robert Griesemer
Last updated: March 6, 2019
[golang.org/design/19308-number-literals](https://golang.org/design/19308-number-literals)
Discussion at:
- [golang.org/issue/19308](https://golang.org/issue/19308) (binary integer literals)
- [golang.org/issue/12711](https://golang.org/issue/12711) (octal integer literals)
- [golang.org/issue/28493](https://golang.org/issue/28493) (digit separator)
- [golang.org/issue/29008](https://golang.org/issue/29008) (hexadecimal floating point)
## Abstract
We propose four related changes to number literals in Go:
1. Add binary integer literals, as in 0b101.
2. Add alternate octal integer literals, as in 0o377.
3. Add hexadecimal floating-point literals, as in 0x1p-1021.
4. Allow _ as a digit separator in number literals.
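A sketch of all four forms together, assuming the proposal is adopted as specified (as it later was, in Go 1.13); `0x1p-2` stands in for the hexadecimal float because its value is easy to verify:

```go
package main

import "fmt"

func main() {
	fmt.Println(0b101 == 5)           // 1. binary integer literal
	fmt.Println(0o377 == 255)         // 2. alternate octal literal
	fmt.Println(0x1p-2 == 0.25)       // 3. hexadecimal floating-point
	fmt.Println(1_000_000 == 1000000) // 4. underscore digit separator
}
```

All four lines print `true`.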
## Background
Go adopted C’s number literal syntax and in so doing
joined a large group of widely-used languages
that all broadly agree about how numbers are written.
The group of such “C-numbered languages” includes at least
C, C++, C#, Java, JavaScript, Perl, PHP, Python, Ruby, Rust, and Swift.
In the decade since Go’s initial design,
nearly all the C-numbered languages have extended
their number literals to add one or more of the four changes in this proposal.
Extending Go in the same way makes it easier for developers
to move between these languages, eliminating an unnecessary rough edge
without adding significant complexity to the language.
### Binary Integer Literals
The idea of writing a program’s integer literals in binary is quite old,
dating back at least to
[PL/I (1964)](http://www.bitsavers.org/pdf/ibm/npl/320-0908_NPL_Technical_Report_Dec64.pdf), which used `'01111000'B`.
In C’s lineage,
[CPL (1966)](http://www.ancientgeek.org.uk/CPL/CPL_Elementary_Programming_Manual.pdf)
supported decimal, binary, and octal integers.
Binary and octal were introduced by an underlined 2 or 8 prefix.
[BCPL (1967)](http://web.eah-jena.de/~kleine/history/languages/Richards-BCPL-ReferenceManual.pdf) removed binary but retained octal,
still introduced by an 8 (it’s unclear whether the 8 was underlined or followed by a space).
[B (1972)](https://www.bell-labs.com/usr/dmr/www/kbman.html)
introduced the leading zero syntax for octal, as in `0377`.
[C as of 1974](http://cm.bell-labs.co/who/dmr/cman74.pdf) had only decimal and octal.
Hexadecimal `0x12ab` had been added by the time
[K&R (1978)](http://www.ccapitalia.net/descarga/docs/1978-ritchie-the-c-programming-language.pdf)
was published.
Possibly the earliest use of the exact `0b01111000` syntax was in
[Caml Light 0.5 (1992)](https://discuss.ocaml.org/t/the-origin-of-the-0b-01-notation/3180/2),
which was written in C and borrowed `0x12ab` for hexadecimal.
Binary integer literals using the `0b01111000` syntax were added in
[C++14 (2014)](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3472.pdf),
[C# 7.0 (2017)](https://blogs.msdn.microsoft.com/dotnet/2017/03/09/new-features-in-c-7-0/),
[Java 7 (2011)](https://docs.oracle.com/javase/7/docs/technotes/guides/language/binary-literals.html),
[JavaScript ES6 (2015)](http://www.ecma-international.org/ecma-262/6.0/#sec-literals-numeric-literals),
[Perl 5.005\_55 (1998)](https://perl5.git.perl.org/perl.git/commitdiff/4f19785bce4da39a768aa6210f1f97ab4c0600dd),
[PHP 5.4.0 (2012)](http://php.net/manual/en/language.types.integer.php),
[Python 2.6 (2008)](https://docs.python.org/2.7/whatsnew/2.6.html#pep-3127-integer-literal-support-and-syntax),
[Ruby 1.4.0 (1999)](https://github.com/ruby/ruby/blob/v1_4_0/ChangeLog#L647),
[Rust 0.1 or earlier (2012)](https://github.com/rust-lang/rust/blob/release-0.1/doc/rust.md#integer-literals),
and
[Swift 1.0 or earlier (2014)](https://carlosicaza.com/swiftbooks/SwiftLanguage.pdf).
The syntax is a leading `0b` prefix followed by some number of 0s and 1s.
There is no corresponding character escape sequence
(that is, no `'\b01111000'` for `'x'`, since `'\b'` is already used for backspace, U+0008).
Most languages also updated their integer parsing and formatting routines to support binary forms as well.
Although C++14 added binary integer literals, C itself has not, [as of C18](http://www.open-std.org/jtc1/sc22/wg14/www/abq/c17_updated_proposed_fdis.pdf).
### Octal Integer Literals
As noted earlier, octal was the
most widely-used form for writing bit patterns
in the early days of computing
(after binary itself).
Even though octal today is far less common,
B’s introduction of `0377` as syntax for octal carried forward into
C, C++, Go, Java, JavaScript, Python, Perl, PHP, and Ruby.
But because programmers don't see octal much,
it sometimes comes as a surprise that
`01234` is not 1234 decimal or that `08` is a syntax error.
[Caml Light 0.5 (1992)](https://discuss.ocaml.org/t/the-origin-of-the-0b-01-notation/3180/2),
mentioned above
as possibly the earliest language with `0b01111000` for binary,
may also have been the first to use the analogous notation `0o377` for octal.
[JavaScript ES3 (1999)](https://www.ecma-international.org/publications/files/ECMA-ST-ARCH/ECMA-262,%203rd%20edition,%20December%201999.pdf)
technically removed support for `0377` as octal,
but of course allowed implementations to continue recognizing them.
[ES5 (2009)](https://www.ecma-international.org/publications/files/ECMA-ST-ARCH/ECMA-262%205th%20edition%20December%202009.pdf)
added “strict mode,” in which, among other restrictions, octal literals are disallowed entirely
(`0377` is an error, not decimal).
[ES6 (2015)](https://www.ecma-international.org/ecma-262/6.0/index.html#sec-literals-numeric-literals)
introduced the `0o377` syntax, allowed even in strict mode.
[Python’s initial release (1991)](https://www.python.org/download/releases/early/)
used `0377` syntax for octal.
[Python 3 (2008)](https://docs.python.org/3.0/reference/lexical_analysis.html#integer-and-long-integer-literals)
changed the syntax to `0o377`,
removing the `0377` syntax (`0377` is an error, not decimal).
[Python 2.7 (2010)](https://docs.python.org/2.7/reference/lexical_analysis.html#integer-and-long-integer-literals)
backported `0o377` as an alternate octal syntax (`0377` is still supported).
[Rust (2012)](https://github.com/rust-lang/rust/blob/release-0.1/doc/rust.md#integer-literals)
initially had no octal syntax but added `0o377` in
[Rust 0.9 (2014)](https://github.com/rust-lang/rust/blob/0.9/doc/rust.md#integer-literals).
[Swift’s initial release (2014)](https://carlosicaza.com/swiftbooks/SwiftLanguage.pdf) used `0o377` for octal.
Both Rust and Swift allow decimals to have leading zeros (`0377` is decimal 377),
creating a potential point of confusion for programmers coming from
other C-numbered languages.
### Hexadecimal Floating-Point
The exact decimal floating-point literal syntax of C and its successors (`1.23e4`)
appears to have originated at IBM in
[Fortran (1956)](https://archive.computerhistory.org/resources/text/Fortran/102649787.05.01.acc.pdf),
some time after the
[1954 draft](https://archive.computerhistory.org/resources/text/Fortran/102679231.05.01.acc.pdf).
The syntax was not used in
[Algol 60 (1960)](http://web.eah-jena.de/~kleine/history/languages/Algol60-Naur.pdf)
but was adopted by [PL/I (1964)](http://www.bitsavers.org/pdf/ibm/npl/320-0908_NPL_Technical_Report_Dec64.pdf)
and
[Algol 68 (1968)](http://web.eah-jena.de/~kleine/history/languages/Algol68-Report.pdf),
and it spread from those into many other languages.
Hexadecimal floating-point literals appear to have originated in
[C99 (1999)](http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf),
spreading to
[C++17 (2017)](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0245r0.html),
[Java 5 (2004)](http://psc.informatik.uni-jena.de/languages/Java/javaspec-3.pdf),
[Perl 5.22 (2015)](https://perldoc.perl.org/perl5220delta.html#Floating-point-parsing-has-been-improved),
and
[Swift's initial release (2014)](https://carlosicaza.com/swiftbooks/SwiftLanguage.pdf).
[IEEE 754-2008](http://www.dsc.ufcg.edu.br/~cnum/modulos/Modulo2/IEEE754_2008.pdf)
also added hexadecimal floating-point literals, citing C99.
All these languages use the syntax `0x123.fffp5`,
where the “`pN`” specifies a decimal number interpreted as a power of two:
`0x123.fffp5` is (0x123 + 0xfff/0x1000) x 2^5.
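That definition can be checked directly; a sketch in the proposed syntax, computing the right-hand side from its integer parts:

```go
package main

import "fmt"

func main() {
	lit := 0x123.fffp5 // proposed hexadecimal floating-point literal

	// (0x123 + 0xfff/0x1000) x 2^5, computed term by term.
	manual := (float64(0x123) + float64(0xfff)/float64(0x1000)) * 32

	fmt.Println(lit == manual) // true: both are exactly 9343.9921875
}
```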
In all languages, the exponent is required: `0x123.fff` is not a valid hexadecimal floating-point literal.
The fraction may be omitted, as in `0x1p-1000`.
C, C++, Java, Perl, and the IEEE 754-2008 standard
allow omitting the digits before or after the hexadecimal point:
`0x1.p0` and `0x.fp0` are valid hexadecimal floating-point literals
just as `1.` and `.9` are valid decimal literals.
Swift requires digits on both sides of a decimal or hexadecimal point;
that is, in Swift, `0x1.p0`, `0x.fp0`, `1.`, and `.9` are all invalid.
Adding hexadecimal floating-point literals also requires adding library support.
C99 added the `%a` and `%A` `printf` formats for formatting and `%a` for scanning.
It also redefined `strtod` to accept hexadecimal floating-point values.
The other languages made similar changes.
C# (as of C# 7.3, which has [no published language specification](https://github.com/dotnet/csharplang/issues/64)),
JavaScript (as of [ES8](https://www.ecma-international.org/ecma-262/8.0/index.html#sec-literals-numeric-literals)),
PHP (as of [PHP 7.3.0](http://php.net/manual/en/language.types.float.php)),
Python (as of [Python 3.7.2](https://docs.python.org/3/reference/lexical_analysis.html#floating-point-literals)),
Ruby (as of [Ruby 2.6.0](https://docs.ruby-lang.org/en/2.6.0/syntax/literals_rdoc.html#label-Numbers)),
and
Rust (as of [Rust 1.31.1](https://doc.rust-lang.org/stable/reference/tokens.html#floating-point-literals))
do not support hexadecimal floating-point literals.
### Digit Separators
Allowing the use of an underscore to separate digits in a number literal into groups dates back at least to
[Ada 83](http://archive.adaic.com/standards/83rat/html/ratl-02-01.html#2.1), possibly earlier.
A digit-separating underscore was added to
[C# 7.0 (2017)](https://blogs.msdn.microsoft.com/dotnet/2017/03/09/new-features-in-c-7-0/),
[Java 7 (2011)](https://docs.oracle.com/javase/7/docs/technotes/guides/language/underscores-literals.html),
[Perl 2.0 (1988)](https://perl5.git.perl.org/perl.git/blob/378cc40b38293ffc7298c6a7ed3cd740ad79be52:/toke.c#l1021),
[Python 3.6 (2016)](https://www.python.org/dev/peps/pep-0515/),
[Ruby 1.0 or earlier (1998)](https://github.com/ruby/ruby/blob/v1_0/parse.y#L2282),
[Rust 0.1 or earlier (2012)](https://github.com/rust-lang/rust/blob/release-0.1/doc/rust.md#integer-literals),
and
[Swift 1.0 or earlier (2014)](https://carlosicaza.com/swiftbooks/SwiftLanguage.pdf).
C has not yet added digit separators as of C18.
C++14 uses
[single-quote as a digit separator](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3781.pdf)
to avoid an ambiguity with C++11 user-defined integer suffixes
that might begin with underscore.
JavaScript is
[considering adding underscore as a digit separator](https://github.com/tc39/proposal-numeric-separator)
but ran into a similar problem with user-defined suffixes.
PHP [considered but decided against](https://wiki.php.net/rfc/number_format_separator) adding digit separators.
The design space for a digit separator feature reduces to four questions:
(1) whether to accept a separator immediately after the single-digit octal `0` base prefix, as in `0_1`;
(2) whether to accept a separator immediately after non-digit base prefixes like `0b`, `0o`, and `0x`, as in `0x_1`;
(3) whether to accept multiple separators in a row, as in `1__2`; and
(4) whether to accept trailing separators, as in `1_`.
(Note that a “leading separator” would create a variable name, as in `_1`.)
These four questions produce sixteen possible approaches.
Case 0b0001:
If the name “digit separator” is understood literally,
so that each underscore must separate (appear between) digits,
then the answers should be that `0_1` is allowed but `0x_1`, `1__2`, and `1_` are all disallowed.
This is the approach taken by
[Ada 83](http://archive.adaic.com/standards/83lrm/html/lrm-02-04.html#2.4)
(using `8#123#` for octal and so avoiding question 1),
[C++14](http://eel.is/c++draft/lex.icon),
[Java 7](https://docs.oracle.com/javase/7/docs/technotes/guides/language/underscores-literals.html),
and
[Swift](https://docs.swift.org/swift-book/ReferenceManual/LexicalStructure.html#ID415)
(using only `0o` for octal and thereby also avoiding question 1).
Case 0b0011:
If we harmonize the treatment of the `0` octal base prefix
with the `0b`, `0o`, and `0x` base prefixes by allowing a digit separator
between a base prefix and leading digit,
then the answers are that `0_1` and `0x_1` are allowed but `1__2` and `1_` are disallowed.
This is the approach taken in
[Python 3.6](https://www.python.org/dev/peps/pep-0515/#literal-grammar) and
[Ruby 1.8.0](https://github.com/ruby/ruby/blob/v1_8_0/parse.y#L3723).
Case 0b0111:
If we allow runs of multiple separators as well, that allows `0_1`, `0x_1`,
and `1__2`, but not `1_`.
This is the approach taken in
[C# 7.2](https://github.com/dotnet/csharplang/blob/master/proposals/csharp-7.2/leading-separator.md)
and
[Ruby 1.6.2](https://github.com/ruby/ruby/blob/v1_6_2/parse.y#L2779).
Case 0b1111:
If we then also accept trailing digit separators,
the implementation becomes trivial: ignore digit separators wherever they appear.
[Perl](https://perl5.git.perl.org/perl.git/blob/378cc40b38293ffc7298c6a7ed3cd740ad79be52:/toke.c#l1021)
takes this approach,
as does [Rust](https://swift.godbolt.org/z/1f72LH).
Other combinations have been tried:
[C# 7.0](https://github.com/dotnet/csharplang/blob/master/proposals/csharp-7.0/digit-separators.md)
used 0b0101 (`0x_1` and `1_` disallowed)
before moving to case 0b0111 in
[C# 7.2](https://github.com/dotnet/csharplang/blob/master/proposals/csharp-7.2/leading-separator.md).
[Ruby 1.0](https://github.com/ruby/ruby/blob/v1_0/parse.y#L2282)
used 0b1110 (only `0_1` disallowed)
and
[Ruby 1.3.1](https://github.com/ruby/ruby/blob/v1_3_1_/parse.y#L2779)
used 0b1101 (only `0x_1` disallowed),
before Ruby 1.6.2 tried 0b0111 and Ruby 1.8.0 settled on 0b0011.
A similar question arises for whether to allow underscore between
a decimal point and a decimal digit in a floating-point number,
or between the literal `e` and the exponent.
We won’t enumerate the cases here, but again languages
make surprising choices.
For example, in Rust, `1_.2` is valid but `1._2` is not.
## Proposal
We propose to add binary integer literals,
to add octal `0o377` as an alternate octal literal syntax,
to add hexadecimal floating-point literals,
and to add underscore as a base-prefix-or-digit separator
(case 0b0011 above; see rationale below),
along with appropriate library support.
Finally, to fit the existing imaginary literals seamlessly
into the new number literals, we propose that the imaginary
suffix `i` may be used on any (non-imaginary) number literal.
### Language Changes
The definitions in https://golang.org/ref/spec#Letters_and_digits add:
> binary_digit = "0" | "1" .
The https://golang.org/ref/spec#Integer_literals section would be amended to read:
> An integer literal is a sequence of digits representing an integer constant.
> An optional prefix sets a non-decimal base:
> 0, 0o, or 0O for octal, 0b or 0B for binary, 0x or 0X for hexadecimal.
> A single 0 is considered a decimal zero.
> In hexadecimal literals, letters a-f and A-F represent values 10 through 15.
> For readability, an underscore may appear after a base prefix or
> between successive digits; such underscores do not change the literal value.
>
> int_lit = decimal_lit | binary_lit | octal_lit | hex_lit .
> decimal_lit = "0" | ( "1" … "9" ) [ [ "_" ] decimal_digits ] .
> binary_lit = "0" ( "b" | "B" ) [ "_" ] binary_digits .
> octal_lit = "0" [ "o" | "O" ] [ "_" ] octal_digits .
> hex_lit = "0" ( "x" | "X" ) [ "_" ] hex_digits .
>
> decimal_digits = decimal_digit { [ "_" ] decimal_digit } .
> binary_digits = binary_digit { [ "_" ] binary_digit } .
> octal_digits = octal_digit { [ "_" ] octal_digit } .
> hex_digits = hex_digit { [ "_" ] hex_digit } .
>
> 42
> 4_2
> 0600
> 0_600
> 0o600
> 0O600 // second character is capital letter 'O'
> 0xBadFace
> 0xBad_Face
> 0x_67_7a_2f_cc_40_c6
> 170141183460469231731687303715884105727
> 170_141183_460469_231731_687303_715884_105727
>
> _42 // an identifier, not an integer literal
> 42_ // invalid: _ must separate successive digits
> 4__2 // invalid: only one _ at a time
> 0_xBadFace // invalid: _ must separate successive digits
The https://golang.org/ref/spec#Floating-point_literals section would be amended to read:
> A floating-point literal is a decimal or hexadecimal representation
> of a floating-point constant.
> A decimal floating-point literal consists of
> an integer part (decimal digits),
> a decimal point,
> a fractional part (decimal digits)
> and an exponent part (e or E followed by an optional sign and decimal digits).
> One of the integer part or the fractional part may be elided;
> one of the decimal point or the exponent part may be elided.
> A hexadecimal floating-point literal consists of
> a 0x or 0X prefix,
> an integer part (hexadecimal digits),
> a decimal point,
> a fractional part (hexadecimal digits),
> and an exponent part (p or P followed by an optional sign and decimal digits).
> One of the integer part or the fractional part may be elided;
> the decimal point may be elided as well, but the exponent part is required.
> (This syntax matches the one given in
> [IEEE 754-2008](https://doi.org/10.1109/IEEESTD.2008.4610935) §5.12.3.)
> For readability, an underscore may appear after a base prefix or
> between successive digits; such underscores do not change the literal value.
>
>
> float_lit = decimal_float_lit | hex_float_lit .
>
> decimal_float_lit = decimal_digits "." [ decimal_digits ] [ decimal_exponent ] |
> decimal_digits decimal_exponent |
> "." decimal_digits [ decimal_exponent ] .
> decimal_exponent = ( "e" | "E" ) [ "+" | "-" ] decimal_digits .
>
> hex_float_lit = "0" ( "x" | "X" ) hex_mantissa hex_exponent .
> hex_mantissa = [ "_" ] hex_digits "." [ hex_digits ] |
> [ "_" ] hex_digits |
> "." hex_digits .
> hex_exponent = ( "p" | "P" ) [ "+" | "-" ] decimal_digits .
>
>
> 0.
> 72.40
> 072.40 // == 72.40
> 2.71828
> 1.e+0
> 6.67428e-11
> 1E6
> .25
> .12345E+5
> 1_5. // == 15.0
> 0.15e+0_2 // == 15.0
>
> 0x1p-2 // == 0.25
> 0x2.p10 // == 2048.0
> 0x1.Fp+0 // == 1.9375
> 0X.8p-0 // == 0.5
> 0X_1FFFP-16 // == 0.1249847412109375
> 0x15e-2 // == 0x15e - 2 (integer subtraction)
>
> 0x.p1 // invalid: mantissa has no digits
> 1p-2 // invalid: p exponent requires hexadecimal mantissa
> 0x1.5e-2 // invalid: hexadecimal mantissa requires p exponent
> 1_.5 // invalid: _ must separate successive digits
> 1._5 // invalid: _ must separate successive digits
> 1.5_e1 // invalid: _ must separate successive digits
> 1.5e_1 // invalid: _ must separate successive digits
> 1.5e1_ // invalid: _ must separate successive digits
The syntax in https://golang.org/ref/spec#Imaginary_literals section would be amended to read:
> An imaginary literal represents the imaginary part of a complex constant.
> It consists of an integer or floating-point literal followed by the lower-case
> letter i.
> The value of an imaginary literal is the value of the respective
> integer or floating-point literal multiplied by the imaginary unit i.
>
> imaginary_lit = (decimal_digits | int_lit | float_lit) "i" .
>
> For backward-compatibility, an imaginary literal's integer part consisting
> entirely of decimal digits (and possibly underscores) is considered a decimal
> integer, not octal, even if it starts with a leading 0.
>
> 0i
> 0123i // == 123i for backward-compatibility
> 0o123i // == 0o123 * 1i == 83i
> 0xabci // == 0xabc * 1i == 2748i
> 0.i
> 2.71828i
> 1.e+0i
> 6.67428e-11i
> 1E6i
> .25i
> .12345E+5i
> 0x1p-2i // == 0x1p-2 * 1i == 0.25i
### Library Changes
In [`fmt`](https://golang.org/pkg/fmt/),
[`Printf`](https://golang.org/pkg/fmt/#Printf) with `%#b`
will format an integer argument in binary with a leading `0b` prefix.
Today, [`%b` already formats an integer in binary](https://play.golang.org/p/3MPBPo2sZu9)
with no prefix;
[`%#b` does the same](https://play.golang.org/p/wwPshrf3oae)
but is rejected by `go` `vet`, including during `go` `test`,
so redefining `%#b` will not break vetted, tested programs.
`Printf` with `%#o` is already defined to format an
integer argument in octal with a leading `0` (not `0o`) prefix,
and all the other available format flags have defined effects too.
It appears no change is possible here.
Clients can use `0o%o`, at least for non-negative arguments.
`Printf` with `%x`
will format a floating-point argument in hexadecimal floating-point syntax.
(Today, `%x` on a floating-point argument formats as a `%!x` error
and also provokes a vet error.)
[`Scanf`](https://golang.org/pkg/fmt/#Scanf) will accept
both decimal and hexadecimal floating-point forms
where it currently accepts decimal.
In [`go/scanner`](https://golang.org/pkg/go/scanner/),
the implementation must change to understand the
new syntax, but the public API needs no changes.
Because [`text/scanner`](https://golang.org/pkg/text/scanner/)
recognizes Go’s number syntax as well,
it will be updated to add the new numbers too.
In [`math/big`](https://golang.org/pkg/math/big/),
[`Int.SetString`](https://golang.org/pkg/math/big/#Int.SetString)
with `base` set to zero accepts binary integer literals already;
it will change to recognize the new octal prefix and the underscore digit separator.
[`ParseFloat`](https://golang.org/pkg/math/big/#ParseFloat) and
[`Float.Parse`](https://golang.org/pkg/math/big/#Float.Parse) with `base` set to zero,
[`Float.SetString`](https://golang.org/pkg/math/big/#Float.SetString),
and [`Rat.SetString`](https://golang.org/pkg/math/big/#Rat.SetString) each
accept binary integer literals and hexadecimal floating-point literals already;
they will change to recognize the new octal prefix and the underscore digit separator.
Calls using non-zero bases will continue to reject inputs with underscores.
In [`strconv`](https://golang.org/pkg/strconv/),
[`ParseInt`](https://golang.org/pkg/strconv/#ParseInt)
and
[`ParseUint`](https://golang.org/pkg/strconv/#ParseUint)
will change behavior.
When the `base` argument is zero,
they will recognize binary literals like `0b0111`
and also allow underscore as a digit separator.
Calls using non-zero bases will continue to reject inputs with underscores.
[`ParseFloat`](https://golang.org/pkg/strconv/#ParseFloat)
will change to accept hexadecimal floating-point literals and
the underscore digit separator.
[`FormatFloat`](https://golang.org/pkg/strconv/#FormatFloat)
will add a new format `x` to generate hexadecimal floating-point.
In [`text/template/parse`](https://golang.org/pkg/text/template/parse),
`(*lex).scanNumber` will need to recognize the three new syntaxes.
This will provide the new literals to both
[`html/template`](https://golang.org/pkg/html/template/)
and
[`text/template`](https://golang.org/pkg/text/template/).
### Tool Changes
Gofmt will understand the new syntax once
[`go/scanner`](https://golang.org/pkg/go/scanner/)
is updated.
For legibility,
gofmt will also rewrite capitalized base prefixes `0B`, `0O`, and `0X`
and exponent prefixes `E` and `P`
to their lowercase equivalents `0b`, `0o`, `0x`, `e`, and `p`.
This is especially important for `0O377` vs `0o377`.
To avoid introducing incompatibilities into
otherwise backward-compatible code,
gofmt will not rewrite `0377` to `0o377`.
(Perhaps in a few years we will be able to consider doing that.)
## Rationale
As discussed in the background section,
the choices being made in this proposal
match those already made in Go's broader language family.
Making these same changes to Go is useful on its own
and avoids unnecessary lexical differences with the
other languages.
This is the primary rationale for all four changes.
### Octal Literals
We considered using `0o377` in the initial design of Go,
but we decided that even if Go used `0o377`
for octal, it would have to reject `0377` as invalid syntax
(that is, Go could not accept `0377` as decimal 377),
to avoid an unpleasant surprise for programmers coming
from C, C++, Java, Python 2, Perl, PHP, Ruby, and so on.
Given that `0377` cannot be decimal,
it seemed at the time unnecessary
and gratuitously different to avoid it for octal.
It still seemed that way in 2015, when the issue
was raised as [golang.org/issue/12711](https://golang.org/issue/12711).
Today, however, it seems clear that there is agreement
among at least the newer C-numbered languages
for `0o377` as octal (either alone or in addition to `0377`).
Harmonizing Go’s octal integer syntax with these languages
makes sense for the same reasons as harmonizing
the binary integer and hexadecimal floating-point syntax.
For backwards compatibility,
we must keep the existing `0377` syntax in Go 1,
so Go will have two octal integer syntaxes,
like Python 2.7 and non-strict JavaScript.
As noted earlier,
after a few years, once there are no supported Go releases
missing the `0o377` syntax,
we could consider changing
`gofmt` to at least reformat `0377` to `0o377` for clarity.
### Arbitrary Bases
Another obvious change is to consider
arbitrary-radix numbers, like Algol 68’s `2r101`.
Perhaps the form most in keeping with Go’s history
would be to allow `BxDIGITS` where `B` is the base,
as in `2x0101`, `8x377`, and `16x12ab`,
where `0x` becomes an alias for `16x`.
We considered this in the initial design of Go,
but it seemed gratuitously
different from the common C-numbered languages,
and it would still not let us interpret `0377` as decimal.
It also seemed that very few programs would be
aided by being able to write numbers in, say,
base 3 or base 36.
That logic still holds today,
reinforced by the weight of existing Go usage.
Better to add only the syntaxes that other languages use.
For discussion, see [golang.org/issue/28256](https://golang.org/issue/28256).
### Library Changes
In the library changes, the various number parsers
are changed to accept underscores only in the base-detecting case.
For example:
```
strconv.ParseInt("12_34", 0, 0)   // decimal with underscores
strconv.ParseInt("0b11_00", 0, 0) // binary with underscores
strconv.ParseInt("012_34", 0, 0)  // 01234 (octal)
strconv.ParseInt("0o12_34", 0, 0) // 0o1234 (octal)
strconv.ParseInt("0x12_34", 0, 0) // 0x1234 (hexadecimal)

strconv.ParseInt("12_34", 10, 0)  // error: fixed base cannot use underscores
strconv.ParseInt("11_00", 2, 0)   // error: fixed base cannot use underscores
strconv.ParseInt("12_34", 8, 0)   // error: fixed base cannot use underscores
strconv.ParseInt("12_34", 16, 0)  // error: fixed base cannot use underscores
```
Note that the fixed-base case also rejects base prefixes (and always has):
```
strconv.ParseInt("0b1100", 2, 0)  // error: fixed base cannot use base prefix
strconv.ParseInt("0o1100", 8, 0)  // error: fixed base cannot use base prefix
strconv.ParseInt("0x1234", 16, 0) // error: fixed base cannot use base prefix
```
The rationale for rejecting underscores when the base is known
is the same as the rationale for rejecting base prefixes:
the caller is likely to be parsing a substring of a larger
input and would not appreciate the “flexibility.”
For example, parsing hex bytes two digits at a time
might use `strconv.ParseInt(input[i:i+2], 16, 8)`,
and parsers for various text formats
use `strconv.ParseInt(field, 10, 64)`
to parse a plain decimal number.
These use cases should not be required to guard
against underscores in the inputs themselves.
On the other hand,
uses of `strconv.ParseInt` and `strconv.ParseUint` with `base` argument zero
already accept decimal, octal `0377`, and hexadecimal literals,
so they will start accepting the new binary and octal literals
and digit-separating underscores.
For example, command line flags defined with `flag.Int` will start
accepting these inputs.
Similarly, uses of `strconv.ParseFloat`, like `flag.Float64`
or the conversion of string-typed database entries to `float64`
in [`database/sql`](https://golang.org/pkg/database/sql/),
will start accepting hexadecimal floating-point literals
and digit-separating underscores.
### Digit Separators
The main bike shed to paint is the detail about
where exactly digit separators are allowed.
Following discussion on [golang.org/issue/19308](https://golang.org/issue/19308),
and matching the latest versions of Python and Ruby,
this proposal adopts the rule
that each digit separator must separate
a digit from the base prefix or another digit:
`0_1`, `0x_1`, and `1_2` are all allowed, while `1__2` and `1_` are not.
## Compatibility
The syntaxes being introduced here were all previously invalid,
either syntactically or semantically.
For an example of the latter,
`0x1.fffp-2` parses in current versions of Go
as the value `0x1`’s `fffp` field minus two.
Of course, integers have no fields, so while this program
is syntactically valid, it is still semantically invalid.
The changes to numeric parsing functions like
`strconv.ParseInt` and `strconv.ParseFloat`
mean that programs that might have failed before
on inputs like `0x1.fffp-2` or `1_2_3` will now succeed.
Some users may be surprised.
Part of the rationale with limiting the changes
to calls using `base` zero is to limit the potential surprise
to those cases that already accepted multiple syntaxes.
## Implementation
The implementation requires:
- Language specification changes, detailed above.
- Library changes, detailed above.
- Compiler changes, in gofrontend and cmd/compile/internal/syntax.
- Testing of compiler changes, library changes, and gofmt.
Robert Griesemer and Russ Cox plan to split the work
and aim to have all the changes ready at the start of the Go 1.13 cycle,
around February 1.
As noted in our blog post
[“Go 2, here we come!”](https://blog.golang.org/go2-here-we-come),
the development cycle will serve as a way to collect experience about
these new features and feedback from (very) early adopters.
At the release freeze, May 1, we will revisit the proposed features
and decide whether to include them in Go 1.13.
# Contracts — Draft Design
Ian Lance Taylor\
Robert Griesemer\
July 31, 2019
## Superseded
We will not be pursuing the approach outlined in this design draft.
It has been replaced by a [new
proposal](https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md).
This document exists for historical context.
## Abstract
We suggest extending the Go language to add optional type parameters
to types and functions.
Type parameters may be constrained by contracts: they may be used as
ordinary types that only support the operations permitted by the
contracts.
Type inference via a unification algorithm is supported to permit
omitting type arguments from function calls in many cases.
Depending on a detail, the design can be fully backward compatible
with Go 1.
## Background
This version of the design draft is similar to the one presented on
August 27, 2018, except that the syntax of contracts is completely
different.
There have been many [requests to add additional support for generic
programming](https://github.com/golang/go/wiki/ExperienceReports#generics)
in Go.
There has been extensive discussion on
[the issue tracker](https://golang.org/issue/15292) and on
[a living document](https://docs.google.com/document/d/1vrAy9gMpMoS3uaVphB32uVXX4pi-HnNjkMEgyAHX4N4/view).
There have been several proposals for adding type parameters, which
can be found through the links above.
Many of the ideas presented here have appeared before.
The main new features described here are the syntax and the careful
examination of contracts.
This design draft suggests extending the Go language to add a form of
parametric polymorphism, where the type parameters are bounded not by
a subtyping relationship but by explicitly defined structural
constraints.
Among other languages that support parametric polymorphism, this
design is perhaps most similar to Ada, although the syntax is
completely different.
This design does not support template metaprogramming or any other
form of compile time programming.
As the term _generic_ is widely used in the Go community, we will
use it below as a shorthand to mean a function or type that takes type
parameters.
Don't confuse the term generic as used in this design with the same
term in other languages like C++, C#, Java, or Rust; they have
similarities but are not the same.
## Design
We will describe the complete design in stages based on examples.
### Type parameters
Generic code is code that is written using types that will be
specified later.
Each unspecified type is called a _type parameter_.
When running the generic code, the type parameter will be set to a
_type argument_.
Here is a function that prints out each element of a slice, where the
element type of the slice, here called `T`, is unknown.
This is a trivial example of the kind of function we want to permit in
order to support generic programming.
```Go
// Print prints the elements of a slice.
// It should be possible to call this with any slice value.
func Print(s []T) { // Just an example, not the suggested syntax.
for _, v := range s {
fmt.Println(v)
}
}
```
With this approach, the first decision to make is: how should the type
parameter `T` be declared?
In a language like Go, we expect every identifier to be declared in
some way.
Here we make a design decision: type parameters are similar to
ordinary non-type function parameters, and as such should be listed
along with other parameters.
However, type parameters are not the same as non-type parameters, so
although they appear in the list of parameters we want to distinguish
them.
That leads to our next design decision: we define an additional,
optional, parameter list, describing type parameters.
This parameter list appears before the regular parameters.
It starts with the keyword `type`, and lists type parameters.
```Go
func Print(type T)(s []T) {
// same as above
}
```
This says that within the function `Print` the identifier `T` is a
type parameter, a type that is currently unknown but that will be
known when the function is called.
Since `Print` has a type parameter, when we call it we must pass a
type argument.
Type arguments are passed much like type parameters are declared: as a
separate list of arguments.
At the call site, the `type` keyword is not required.
```Go
Print(int)([]int{1, 2, 3})
```
### Type contracts
Let's make our example slightly more complicated.
Let's turn it into a function that converts a slice of any type into a
`[]string` by calling a `String` method on each element.
```Go
// This function is INVALID.
func Stringify(type T)(s []T) (ret []string) {
for _, v := range s {
ret = append(ret, v.String()) // INVALID
}
return ret
}
```
This might seem OK at first glance, but in this example, `v` has type
`T`, and we don't know anything about `T`.
In particular, we don't know that `T` has a `String` method.
So the call to `v.String()` is invalid.
Naturally, the same issue arises in other languages that support
generic programming.
In C++, for example, a generic function (in C++ terms, a function
template) can call any method on a value of generic type.
That is, in the C++ approach, calling `v.String()` is fine.
If the function is called with a type that does not have a `String`
method, the error is reported at the point of the function call.
These errors can be lengthy, as there may be several layers of generic
function calls before the error occurs, all of which must be reported
for complete clarity.
The C++ approach would be a poor choice for Go.
One reason is the style of the language.
In Go we don't refer to names, such as, in this case, `String`, and
hope that they exist.
Go resolves all names to their declarations when they are seen.
Another reason is that Go is designed to support programming at
scale.
We must consider the case in which the generic function definition
(`Stringify`, above) and the call to the generic function (not shown,
but perhaps in some other package) are far apart.
In general, all generic code implies a contract that type arguments
need to implement.
In this case, the contract is pretty obvious: the type has to have a
`String() string` method.
In other cases it may be much less obvious.
We don't want to derive the contract from whatever `Stringify` happens
to do.
If we did, a minor change to `Stringify` might change the contract.
That would mean that a minor change could cause code far away, that
calls the function, to unexpectedly break.
It's fine for `Stringify` to deliberately change its contract, and
force users to change.
What we want to avoid is `Stringify` changing its contract
accidentally.
This is an important rule that we believe should apply to any attempt
to define generic programming in Go: there should be an explicit
contract between the generic code and calling code.
### Contract introduction
In this design, a contract describes the requirements of a set of
types.
We'll discuss contracts further later, but for now we'll just say that
one of the things that a contract can do is specify that a type
argument must implement a particular method.
For the `Stringify` example, we need to write a contract that says
that the single type parameter has a `String` method that takes no
arguments and returns a value of type `string`.
We write that like this:
```Go
contract stringer(T) {
T String() string
}
```
A contract is introduced with a new keyword `contract`, followed by a
name and a list of identifiers.
The identifiers name the types that the contract will specify.
Specifying a required method looks like defining a method in an
interface type, except that the receiver type must be explicitly
provided.
### Using a contract to verify type arguments
A contract serves two purposes.
First, contracts are used to validate a set of type arguments.
As shown above, when a function with type parameters is called, it
will be called with a set of type arguments.
When the compiler sees the function call, it will use the contract to
validate the type arguments.
If the type arguments don't satisfy the requirements specified by the
contract, the compiler will report a type error: the call is using
types that the function's contract does not permit.
The `stringer` contract seen earlier requires that the type argument
used for `T` has a `String` method that takes no arguments and
returns a value of type `string`.
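For example, a valid and an invalid call might look like this in the
proposed syntax (`MyString` is a hypothetical type, used here only for
illustration):
```Go
type MyString string

// MyString has the method required by the stringer contract.
func (m MyString) String() string { return string(m) }

var ok = Stringify(MyString)([]MyString{"a", "b"}) // valid

// var bad = Stringify(int)([]int{1, 2}) // INVALID: int has no String method
```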
### The party of the second part
A contract is not used only at the call site.
It is also used to describe what the function using the contract, the
function with type parameters, is permitted to do with those type
parameters.
In a function with type parameters that does not use a contract, such
as the `Print` example shown earlier, the function is only permitted
to use those type parameters in ways that any type may be used in Go.
That is, operations like:
* declare variables of those types
* assign other values of the same type to those variables
* pass those variables to functions or return them from functions
* take the address of those variables
* define and use other types that use those types, such as a slice of
that type
If the function wants to take any more specific action with the type
parameter, or a value of the type parameter, the contract must
explicitly support that action.
In the `stringer` example seen earlier, the contract provides the
ability to call a method `String` on a value of the type parameter.
That is, naturally, exactly the operation that the `Stringify`
function needs.
### Using a contract
We've seen how the `stringer` contract can be used to verify that a
type argument is suitable for the `Stringify` function, and we've seen
how the contract permits the `Stringify` function to call the `String`
method that it needs.
The final step is showing how the `Stringify` function uses the
`stringer` contract.
This is done by naming the contract at the end of the list of type
parameters.
```Go
func Stringify(type T stringer)(s []T) (ret []string) {
for _, v := range s {
ret = append(ret, v.String()) // now valid
}
return ret
}
```
The list of type parameters (in this case, a list with the single
element `T`) is followed by an optional contract name.
When just the contract name is listed, as above, the contract must
have the same number of parameters as the function has type
parameters; when validating the contract, the type parameters are
passed to the contract in the order in which they appear in the
function signature.
Later we'll discuss passing explicit type parameters to the contract.
### Multiple type parameters
Although the examples we've seen so far use only a single type
parameter, functions may have multiple type parameters.
```Go
func Print2(type T1, T2)(s1 []T1, s2 []T2) { ... }
```
Compare this to
```Go
func Print2Same(type T1)(s1 []T1, s2 []T1) { ... }
```
In `Print2` `s1` and `s2` may be slices of different types.
In `Print2Same` `s1` and `s2` must be slices of the same element
type.
Although functions may have multiple type parameters, they may only
have a single contract.
```Go
contract viaStrings(To, From) {
To Set(string)
From String() string
}
func SetViaStrings(type To, From viaStrings)(s []From) []To {
r := make([]To, len(s))
for i, v := range s {
r[i].Set(v.String())
}
return r
}
```
### Parameterized types
We want more than just generic functions: we also want generic types.
We suggest that types be extended to take type parameters.
```Go
type Vector(type Element) []Element
```
A type's parameters are just like a function's type parameters.
Within the type definition, the type parameters may be used like any
other type.
To use a parameterized type, you must supply type arguments.
This looks like a function call, except that the function in this case
is actually a type.
This is called _instantiation_.
```Go
var v Vector(int)
```
Parameterized types can have methods.
The receiver type of a method must list the type parameters.
They are listed without the `type` keyword or any contract.
```Go
func (v *Vector(Element)) Push(x Element) { *v = append(*v, x) }
```
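Using the proposed syntax, an instantiated `Vector` and its method
work as one would expect:
```Go
var v Vector(int)
v.Push(1) // within Push, Element is int, so x must be an int
v.Push(2) // v now holds the elements 1 and 2
```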
A parameterized type can refer to itself in cases where a type can
ordinarily refer to itself, but when it does so the type arguments
must be the type parameters, listed in the same order.
This restriction prevents infinite recursion of type instantiation.
```Go
// This is OK.
type List(type Element) struct {
next *List(Element)
val Element
}
// This type is INVALID.
type P(type Element1, Element2) struct {
F *P(Element2, Element1) // INVALID; must be (Element1, Element2)
}
```
(Note: with more understanding of how people want to write code, it
may be possible to relax this rule to permit some cases that use
different type arguments.)
The type parameters of a parameterized type may have a contract.
```Go
type StringableVector(type T stringer) []T
func (s StringableVector(T)) String() string {
var sb strings.Builder
sb.WriteString("[")
for i, v := range s {
if i > 0 {
sb.WriteString(", ")
}
sb.WriteString(v.String())
}
sb.WriteString("]")
return sb.String()
}
```
When a parameterized type is a struct, and the type parameter is
embedded as a field in the struct, the name of the field is the name
of the type parameter, not the name of the type argument.
```Go
type Lockable(type T) struct {
T
mu sync.Mutex
}
func (l *Lockable(T)) Get() T {
l.mu.Lock()
defer l.mu.Unlock()
return l.T
}
```
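To make the field-naming rule concrete, here is a sketch in the
proposed syntax (written from within the same package, since `mu` is
unexported); note that the embedded field is referred to as `T`, not
as `int`:
```Go
var l Lockable(int)
l.mu.Lock()
l.T = 42 // the embedded field is named T (the type parameter), not int
l.mu.Unlock()
n := l.Get() // n is an int with value 42
```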
### Parameterized type aliases
Type aliases may not have parameters.
This restriction exists because it is unclear how to handle a type
alias with type parameters that have a contract.
Type aliases may refer to instantiated types.
```Go
type VectorInt = Vector(int)
```
If a type alias refers to a parameterized type, it must provide type
arguments.
### Methods may not take additional type arguments
Although methods of a parameterized type may use the type's
parameters, methods may not themselves have additional type
parameters.
Where it would be useful to add type arguments to a method, people
will have to write a suitably parameterized top-level function.
This restriction avoids having to specify the details of exactly when
a method with type arguments implements an interface.
(This is a feature that can perhaps be added later if it proves
necessary.)
### Contract embedding
A contract may embed another contract, by listing it in the
contract body with type arguments.
This will look a bit like a method definition in the contract body,
but it will be different because there will be no receiver type.
It is handled as if the embedded contract's body were placed into the
calling contract, with the embedded contract's type parameters
replaced by the embedded type arguments.
This contract embeds the contract `stringer` defined earlier.
```Go
contract PrintStringer(X) {
stringer(X)
X Print()
}
```
This is equivalent to
```Go
contract PrintStringer(X) {
X String() string
X Print()
}
```
### Using types that refer to themselves in contracts
Although this is implied by what has already been discussed, it's
worth pointing out explicitly that a contract may require a method to
have an argument whose type is the same as the method's receiver
type.
```Go
package compare
// The equal contract describes types that have an Equal method with
// an argument of the same type as the receiver type.
contract equal(T) {
T Equal(T) bool
}
// Index returns the index of e in s, or -1.
func Index(type T equal)(s []T, e T) int {
for i, v := range s {
// Both e and v are type T, so it's OK to call e.Equal(v).
if e.Equal(v) {
return i
}
}
return -1
}
```
This function can be used with any type that has an `Equal` method
whose single parameter type is the same as the receiver type.
```Go
import "compare"
type EqualInt int
// The Equal method lets EqualInt satisfy the compare.equal contract.
func (a EqualInt) Equal(b EqualInt) bool { return a == b }
func Index(s []EqualInt, e EqualInt) int {
return compare.Index(EqualInt)(s, e)
}
```
In this example, when we pass `EqualInt` to `compare.Index`, we
check whether `EqualInt` satisfies the contract `compare.equal`.
We replace `T` with `EqualInt` in the declaration of the `Equal`
method in the `equal` contract, and see whether `EqualInt` has a
matching method.
`EqualInt` has a method `Equal` that accepts a parameter of type
`EqualInt`, so all is well, and the compilation succeeds.
### Mutually referencing type parameters
Within a contract, methods may refer to any of the contract's type
parameters.
For example, consider a generic graph package that contains generic
algorithms that work with graphs.
The algorithms use two types, `Node` and `Edge`.
`Node` is expected to have a method `Edges() []Edge`.
`Edge` is expected to have a method `Nodes() (Node, Node)`.
A graph can be represented as a `[]Node`.
This simple representation is enough to implement graph algorithms
like finding the shortest path.
```Go
package graph
contract G(Node, Edge) {
Node Edges() []Edge
Edge Nodes() (from Node, to Node)
}
type Graph(type Node, Edge G) struct { ... }
func New(type Node, Edge G)(nodes []Node) *Graph(Node, Edge) { ... }
func (g *Graph(Node, Edge)) ShortestPath(from, to Node) []Edge { ... }
```
While at first glance this may look like a typical use of interface
types, `Node` and `Edge` are non-interface types with specific
methods.
In order to use `graph.Graph`, the type arguments used for `Node` and
`Edge` have to define methods that follow a certain pattern, but they
don't have to actually use interface types to do so.
For example, consider these type definitions in some other package:
```Go
type Vertex struct { ... }
func (v *Vertex) Edges() []*FromTo { ... }
type FromTo struct { ... }
func (ft *FromTo) Nodes() (*Vertex, *Vertex) { ... }
```
There are no interface types here, but we can instantiate
`graph.Graph` using the type arguments `*Vertex` and `*FromTo`:
```Go
var g = graph.New(*Vertex, *FromTo)([]*Vertex{ ... })
```
`*Vertex` and `*FromTo` are not interface types, but when used
together they define methods that implement the contract `graph.G`.
Note that we couldn't use plain `Vertex` or `FromTo`, since the
required methods are pointer methods, not value methods.
Although `Node` and `Edge` do not have to be instantiated with
interface types, it is also OK to use interface types if you like.
```Go
type NodeInterface interface { Edges() []EdgeInterface }
type EdgeInterface interface { Nodes() (NodeInterface, NodeInterface) }
```
We could instantiate `graph.Graph` with the types `NodeInterface` and
`EdgeInterface`, since they implement the `graph.G` contract.
There isn't much reason to instantiate a type this way, but it is
permitted.
This ability for type parameters to refer to other type parameters
illustrates an important point: it should be a requirement for any
attempt to add generics to Go that it be possible to instantiate
generic code with multiple type arguments that refer to each other in
ways that the compiler can check.
As it is a common observation that contracts share some
characteristics of interface types, it's worth stressing that this
capability is one that contracts provide but interface types do not.
### Passing parameters to a contract
As mentioned earlier, by default the type parameters are passed to the
contract in the order in which they appear in the function signature.
It is also possible to explicitly pass type parameters to a contract
as though they were arguments.
This is useful if the contract and the generic function take type
parameters in a different order, or if only some parameters need a
contract.
In this example the type parameter `E` can be any type, but the type
parameter `M` must implement the `String` method.
The function passes just `M` to the `stringer` contract, leaving `E`
as though it had no constraints.
```Go
func MapAndPrint(type E, M stringer(M))(s []E, f func(E) M) []string {
r := make([]string, len(s))
for i, v := range s {
r[i] = f(v).String()
}
return r
}
```
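A call might look like this in the proposed syntax (`Celsius` is a
hypothetical type introduced for illustration):
```Go
type Celsius float64

func (c Celsius) String() string {
	return strconv.FormatFloat(float64(c), 'f', 1, 64)
}

// E is int, which may be any type; M is Celsius, which must
// (and does) satisfy the stringer contract.
var labels = MapAndPrint(int, Celsius)(
	[]int{20, 25},
	func(i int) Celsius { return Celsius(i) },
)
```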
### Contract syntactic details
Contracts may only appear at the top level of a package.
While contracts could be defined to work within the body of a
function, it's hard to think of realistic examples in which they would
be useful.
We see this as similar to the way that methods cannot be defined
within the body of a function.
A minor point is that only permitting contracts at the top level
permits the design to be Go 1 compatible.
There are a few ways to handle the syntax:
* We could make `contract` be a keyword only at the start of a
top-level declaration, and otherwise be a normal identifier.
* We could declare that if you use `contract` at the start of a
top-level declaration, then it becomes a keyword for the entire
package.
* We could make `contract` always be a keyword, albeit one that can
only appear in one place, in which case this design is not Go 1
compatible.
Like other top level declarations, a contract is exported if its name
starts with an uppercase letter.
If exported it may be used by functions, types, or contracts in other
packages.
### Values of type parameters are not boxed
In the current implementations of Go, interface values always hold
pointers.
Putting a non-pointer value in an interface variable causes the value
to be _boxed_.
That means that the actual value is stored somewhere else, on the heap
or stack, and the interface value holds a pointer to that location.
In this design, values of generic types are not boxed.
For example, let's consider a function that works for any type `T`
with a `Set(string)` method that initializes the value based on a
string, and uses it to convert a slice of `string` to a slice of `T`.
```Go
package from
contract setter(T) {
T Set(string) error
}
func Strings(type T setter)(s []string) ([]T, error) {
ret := make([]T, len(s))
for i, v := range s {
if err := ret[i].Set(v); err != nil {
return nil, err
}
}
return ret, nil
}
```
Now let's see some code in a different package.
```Go
type Settable int
func (p *Settable) Set(s string) (err error) {
*p, err = strconv.Atoi(s)
return err
}
func F() {
// The type of nums is []Settable.
nums, err := from.Strings(Settable)([]string{"1", "2"})
if err != nil { ... }
// Settable can be converted directly to int.
// This will set first to 1.
first := int(nums[0])
...
}
```
When we call `from.Strings` with the type `Settable` we get back a
`[]Settable` (and an error).
The values in that slice will be `Settable` values, which is to say,
they will be integers.
They will not be boxed as pointers, even though they were created and
set by a generic function.
Similarly, when a parameterized type is instantiated it will have the
expected types as components.
```Go
package pair
type Pair(type carT, cdrT) struct {
f1 carT
f2 cdrT
}
```
When this is instantiated, the fields will not be boxed, and no
unexpected memory allocations will occur.
The type `pair.Pair(int, string)` is convertible to `struct { f1 int;
f2 string }`.
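That conversion might be written like this in the proposed syntax
(from within the `pair` package, since the fields are unexported):
```Go
p := Pair(int, string){f1: 1, f2: "one"}

// The instantiated type has exactly the expected struct layout,
// so it converts directly, with no boxing and no allocation.
s := struct {
	f1 int
	f2 string
}(p)
```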
### Function argument type inference
In many cases, when calling a function with type parameters, we can
use type inference to avoid having to explicitly write out the type
arguments.
Go back to the example of a call to our simple `Print` function:
```Go
Print(int)([]int{1, 2, 3})
```
The type argument `int` in the function call can be inferred from the
type of the non-type argument.
This can only be done when all the function's type parameters are used
for the types of the function's (non-type) input parameters.
If there are some type parameters that are used only for the
function's result parameter types, or only in the body of the
function, then it is not possible to infer the type arguments for the
function, since there is no value from which to infer the types.
For example, when calling `from.Strings` as defined earlier, the type
parameters cannot be inferred because the function's type parameter
`T` is not used for an input parameter, only for a result.
When the function's type arguments can be inferred, the language uses
type unification.
On the caller side we have the list of types of the actual (non-type)
arguments, which for the `Print` example here is simply `[]int`.
On the function side is the list of the types of the function's
non-type parameters, which here is `[]T`.
In the lists, we discard respective arguments for which the function
side does not use a type parameter.
We must then unify the remaining argument types.
Type unification is a two pass algorithm.
In the first pass, we ignore untyped constants on the caller side and
their corresponding types in the function definition.
We compare corresponding types in the lists.
Their structure must be identical, except that type parameters on the
function side match the type that appears on the caller side at the
point where the type parameter occurs.
If the same type parameter appears more than once on the function
side, it will match multiple argument types on the caller side.
Those caller types must be identical, or type unification fails, and
we report an error.
After the first pass, we check any untyped constants on the caller
side.
If there are no untyped constants, or if the type parameters in the
corresponding function types have matched other input types, then
type unification is complete.
Otherwise, for the second pass, for any untyped constants whose
corresponding function types are not yet set, we determine the default
type of the untyped constant in [the usual
way](https://golang.org/ref/spec#Constants).
Then we run the type unification algorithm again, this time with no
untyped constants.
In this example
```Go
s1 := []int{1, 2, 3}
Print(s1)
```
we compare `[]int` with `[]T`, match `T` with `int`, and we are done.
The single type parameter `T` is `int`, so we infer that the call
to `Print` is really a call to `Print(int)`.
For a more complex example, consider
```Go
package transform
func Slice(type From, To)(s []From, f func(From) To) []To {
r := make([]To, len(s))
for i, v := range s {
r[i] = f(v)
}
return r
}
```
The two type parameters `From` and `To` are both used for input
parameters, so type inference is possible.
In the call
```Go
strs := transform.Slice([]int{1, 2, 3}, strconv.Itoa)
```
we unify `[]int` with `[]From`, matching `From` with `int`.
We unify the type of `strconv.Itoa`, which is `func(int) string`,
with `func(From) To`, matching `From` with `int` and `To` with
`string`.
`From` is matched twice, both times with `int`.
Unification succeeds, so the call written as `transform.Slice` is a
call of `transform.Slice(int, string)`.
To see the untyped constant rule in effect, consider
```Go
package pair
func New(type T)(f1, f2 T) *Pair(T) { ... }
```
In the call `pair.New(1, 2)` both arguments are untyped constants, so
both are ignored in the first pass.
There is nothing to unify.
We still have two untyped constants after the first pass.
Both are set to their default type, `int`.
The second run of the type unification pass unifies `T` with `int`,
so the final call is `pair.New(int)(1, 2)`.
In the call `pair.New(1, int64(2))` the first argument is an untyped
constant, so we ignore it in the first pass.
We then unify `int64` with `T`.
At this point the type parameter corresponding to the untyped constant
is fully determined, so the final call is `pair.New(int64)(1, int64(2))`.
In the call `pair.New(1, 2.5)` both arguments are untyped constants,
so we move on to the second pass.
This time we set the first constant to `int` and the second to
`float64`.
We then try to unify `T` with both `int` and `float64`, so
unification fails, and we report a compilation error.
Note that type inference is done without regard to contracts.
First we use type inference to determine the type arguments to use for
the package, and then, if that succeeds, we check whether those type
arguments implement the contract.
Note that after successful type inference, the compiler must still
check that the arguments can be assigned to the parameters, as for any
function call.
Inference can succeed and yet this check can fail when untyped
constants are involved, since a constant may not be representable in
the inferred type.
(Note: Type inference is a convenience feature.
Although we think it is an important feature, it does not add any
functionality to the design, only convenience in using it.
It would be possible to omit it from the initial implementation, and
see whether it seems to be needed.
That said, this feature doesn't require additional syntax, and is
likely to significantly reduce the stutter of repeated type arguments
in code.)
(Note: We could also consider supporting type inference for
composite literals of parameterized types.
```Go
type Pair(type T) struct { f1, f2 T }
var V = Pair{1, 2} // inferred as Pair(int){1, 2}
```
It's not clear how often this will arise in real code.)
### Instantiating a function
Go normally permits you to refer to a function without passing any
arguments, producing a value of function type.
You may not do this with a function that has type parameters; all type
arguments must be known at compile time.
However, you can instantiate the function, by passing type arguments,
without passing any non-type arguments.
This will produce an ordinary function value with no type parameters.
```Go
// PrintInts will be type func([]int).
var PrintInts = Print(int)
```
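The resulting value behaves like any other function value (proposed
syntax):
```Go
func PrintThree() {
	f := Print(int) // f has the ordinary type func([]int)
	f([]int{1, 2, 3})
}
```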
### Type assertions and switches
A useful function with type parameters will support any type argument
that implements the contract.
Sometimes, though, it's possible to use a more efficient
function implementation for some type arguments.
The language already has mechanisms for code to find out what type it
is working with: type assertions and type switches.
Those are normally only permitted with interface types.
In this design, functions are also permitted to use them with values
whose types are type parameters, or are based on type parameters.
This doesn't add any functionality, as the function could get the same
information using the reflect package.
It's merely occasionally convenient, and it may result in more
efficient code.
For example, this code is permitted even if it is called with a type
argument that is not an interface type.
```Go
contract reader(T) {
T Read([]byte) (int, error)
}
func ReadByte(type T reader)(r T) (byte, error) {
if br, ok := r.(io.ByteReader); ok {
return br.ReadByte()
}
var b [1]byte
_, err := r.Read(b[:])
return b[0], err
}
```
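Both of the following instantiations would be valid under the design
(proposed syntax; `onlyReader` is a hypothetical type with a `Read`
method but no `ReadByte` method):
```Go
type onlyReader struct{ r io.Reader }

func (o onlyReader) Read(p []byte) (int, error) { return o.r.Read(p) }

func example() {
	// *bytes.Reader has a ReadByte method, so the type assertion
	// succeeds and the efficient path is taken.
	b1, _ := ReadByte(*bytes.Reader)(bytes.NewReader([]byte("a")))

	// onlyReader has no ReadByte method, so the assertion fails
	// and the fallback through Read is used instead.
	b2, _ := ReadByte(onlyReader)(onlyReader{strings.NewReader("b")})

	_, _ = b1, b2
}
```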
### Instantiating types in type literals
When instantiating a type at the end of a type literal, there is a
parsing ambiguity.
```Go
x1 := []T(v1)
x2 := []T(v2){}
```
In this example, the first case is a type conversion of `v1` to the
type `[]T`.
The second case is a composite literal of type `[]T(v2)`, where `T` is
a parameterized type that we are instantiating with the type argument
`v2`.
The ambiguity is at the point where we see the open parenthesis: at
that point the parser doesn't know whether it is seeing a type
conversion or something like a composite literal.
To avoid this ambiguity, we require that type instantiations at the
end of a type literal be parenthesized.
To write a type literal that is a slice of a type instantiation, you
must write `[](T(v1))`.
Without those parentheses, `[]T(x)` is parsed as `([]T)(x)`, not as
`[](T(x))`.
This only applies to slice, array, map, chan, and func type literals
ending in a type name.
Of course it is always possible to use a separate type declaration to
give a name to the instantiated type, and to use that.
This is only an issue when the type is instantiated in place.
### Using parameterized types as unnamed function parameter types
When parsing a parameterized type as an unnamed function parameter
type, there is a parsing ambiguity.
```Go
var f func(x(T))
```
In this example we don't know whether the function has a single
unnamed parameter of the parameterized type `x(T)`, or whether this is
a named parameter `x` of the type `(T)` (written with parentheses).
For backward compatibility, we treat this as the latter case: `x(T)`
is a parameter `x` of type `(T)`.
In order to describe a function with a single unnamed parameter of
type `x(T)`, either the parameter must be named, or extra parentheses
must be used.
```Go
var f1 func(_ x(T))
var f2 func((x(T)))
```
### Embedding a parameterized type in a struct
There is a parsing ambiguity when embedding a parameterized type
in a struct type.
```Go
type S1(type T) struct {
f T
}
type S2 struct {
S1(int)
}
```
In this example we don't know whether struct `S2` has a single
field named `S1` of type `(int)`, or whether we
are trying to embed the instantiated type `S1(int)` into `S2`.
For backward compatibility, we treat this as the former case: `S2` has
a field named `S1`.
In order to embed an instantiated type in a struct, we could require that
extra parentheses be used.
```Go
type S2 struct {
(S1(int))
}
```
This is currently not supported by the language, so this would suggest
generally extending the language to permit types embedded in structs to
be parenthesized.
### Embedding a parameterized interface type in an interface
There is a parsing ambiguity when embedding a parameterized interface
type in another interface type.
```Go
type I1(type T) interface {
M(T)
}
type I2 interface {
I1(int)
}
```
In this example we don't know whether interface `I2` has a single
method named `I1` that takes an argument of type `int`, or whether we
are trying to embed the instantiated type `I1(int)` into `I2`.
For backward compatibility, we treat this as the former case: `I2` has
a method named `I1`.
In order to embed an instantiated interface, we could require that
extra parentheses be used.
```Go
type I2 interface {
(I1(int))
}
```
This is currently not supported by the language, so this would suggest
generally extending the language to permit embedded interface types to
be parenthesized.
### Reflection
We do not propose to change the reflect package in any way.
When a type or function is instantiated, all of the type parameters
will become ordinary non-generic types.
The `String` method of a `reflect.Type` value of an instantiated type
will return the name with the type arguments in parentheses.
For example, `List(int)`.
It's impossible for non-generic code to refer to generic code without
instantiating it, so there is no reflection information for
uninstantiated generic types or functions.
### Contracts details
Let's take a deeper look at contracts.
Operations on values whose type is a type parameter must be permitted
by the type parameter's contract.
This means that the power of generic functions is tied precisely to
the interpretation of the contract body.
It also means that the language requires a precise definition of the
operations that are permitted by a given contract.
#### Methods
All the contracts we've seen so far show only method calls in the
contract body.
If a method call appears in the contract body, that method may be
called on a value in any statement or expression in the function
body.
It will take argument and result types as specified in the contract
body.
#### Pointer methods
In some cases we need to require that a method be a pointer method.
This will happen when a function needs to declare variables whose
type is the type parameter, and also needs to call methods that are
defined for the pointer to the type parameter.
For example:
```Go
contract setter(T) {
	T Set(string)
}

func Init(type T setter)(s string) T {
	var r T
	r.Set(s)
	return r
}

type MyInt int

func (p *MyInt) Set(s string) {
	v, err := strconv.Atoi(s)
	if err != nil {
		log.Fatal("Init failed", err)
	}
	*p = MyInt(v)
}

// INVALID
// MyInt does not have a Set method, only *MyInt has one.
var Init1 = Init(MyInt)("1")

// DOES NOT WORK
// r in Init is type *MyInt with value nil,
// so the Set method does a nil pointer dereference.
var Init2 = Init(*MyInt)("2")
```
The function `Init` cannot be instantiated with the type `MyInt`, as
that type does not have a method `Set`; only `*MyInt` has `Set`.
But instantiating `Init` with `*MyInt` doesn't work either, as then
the local variable `r` in `Init` is a value of type `*MyInt`
initialized to the zero value, which for a pointer is `nil`.
The `Init` function then invokes the `Set` method on a `nil` pointer,
causing a `nil` pointer dereference at the line `*p = MyInt(v)`.
In order to permit this kind of code, contracts permit specifying that
for a type parameter `T` the pointer type `*T` has a method.
```Go
contract setter(T) {
	*T Set(string)
}
```
With this definition of `setter`, instantiating `Init` with `MyInt` is
valid and the code works.
The local variable `r` has type `MyInt`, and the address of `r` is
passed as the receiver of the `Set` pointer method.
Instantiating `Init` with `*MyInt` is now invalid, as the type
`**MyInt` does not have a method `Set`.
Listing a `*T` method in a contract means that the method must be on
the type `*T`, and it means that the parameterized function is only
permitted to call the method on an addressable value of type `T`.
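Contracts themselves are hypothetical syntax, but the method-set rule they lean on is observable in Go today: a pointer-receiver method is in the method set of `*MyInt` but not of `MyInt`. A minimal runnable sketch (the `Setter` interface stands in for the contract's method requirement):

```go
package main

import (
	"fmt"
	"strconv"
)

// Setter plays the role of the setter contract's *T Set(string) requirement.
type Setter interface{ Set(string) }

type MyInt int

// Set is a pointer method: it is in *MyInt's method set, not MyInt's.
func (p *MyInt) Set(s string) {
	v, err := strconv.Atoi(s)
	if err != nil {
		panic(err)
	}
	*p = MyInt(v)
}

func main() {
	var m MyInt
	var s Setter = &m // &m satisfies Setter; m alone would not compile
	s.Set("42")
	fmt.Println(m)
}
```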
#### Pointer or value methods
If a method is listed in a contract with a plain `T` rather than `*T`,
then it may be either a pointer method or a value method of `T`.
In order to avoid worrying about this distinction, in a generic
function body all method calls will be pointer method calls.
If necessary, the function body will insert temporary variables,
not seen by the user, in order to obtain an addressable value on
which to call the method.
For example, this code is valid, even though `LookupAsString` calls
`String` in a context that requires a value method, and `MyInt` only
has a pointer method.
```Go
func LookupAsString(type T stringer)(m map[int]T, k int) string {
	return m[k].String() // Note: calls method on value of type T
}

type MyInt int

func (p *MyInt) String() string { return strconv.Itoa(int(*p)) }

func F(m map[int]MyInt) string {
	return LookupAsString(MyInt)(m, 0)
}
```
This makes it easier to understand which types satisfy a contract, and
how a contract may be used.
It has the drawback that in some cases a pointer method that modifies
the value to which the receiver points may be called on a temporary
variable that is discarded after the method completes.
It may be possible to add a vet warning for a case where a generic
function uses a temporary variable for a method call and the function
is instantiated with a type that has only a pointer method, not a
value method.
(Note: we should revisit this decision if it leads to confusion or
incorrect code.)
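The hidden-temporary rule mirrors what Go already does for addressable values, where the compiler inserts the `&` automatically. A small runnable sketch of the existing, non-generic behavior:

```go
package main

import (
	"fmt"
	"strconv"
)

type MyInt int

func (p *MyInt) String() string { return strconv.Itoa(int(*p)) }

func main() {
	v := MyInt(7)
	// v is addressable, so v.String() is shorthand for (&v).String().
	// The design proposes using a hidden temporary to get the same
	// effect for non-addressable values inside generic functions.
	fmt.Println(v.String())
}
```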
#### Operators
Method calls are not sufficient for everything we want to express.
Consider this simple function that returns the smallest element of a
slice of values, where the slice is assumed to be non-empty.
```Go
// This function is INVALID.
func Smallest(type T)(s []T) T {
	r := s[0] // panics if slice is empty
	for _, v := range s[1:] {
		if v < r { // INVALID
			r = v
		}
	}
	return r
}
```
Any reasonable generics implementation should let you write this
function.
The problem is the expression `v < r`.
This assumes that `T` supports the `<` operator, but there is no
contract requiring that.
Without a contract the function body can only use operations that are
available for all types, but not all Go types support `<`.
It follows that we need a way to write a contract that accepts only
types that support `<`.
In order to do that, we observe that, aside from two exceptions that
we will discuss later, all the arithmetic, comparison, and logical
operators defined by the language may only be used with types that are
predeclared by the language, or with defined types whose underlying
type is one of those predeclared types.
That is, the operator `<` can only be used with a predeclared type
such as `int` or `float64`, or a defined type whose underlying type is
one of those types.
Go does not permit using `<` with an aggregate type or with an
arbitrary defined type.
This means that rather than try to write a contract for `<`, we can
approach this the other way around: instead of saying which operators
a contract should support, we can say which (underlying) types a
contract should accept.
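This rule can be checked in ordinary Go today: a defined type whose underlying type is predeclared inherits that type's operators, while there is no way to define `<` for a struct. A sketch using a hypothetical `Celsius` type (the name is ours, not from the design):

```go
package main

import "fmt"

// Celsius is a defined type whose underlying type is the predeclared
// float64, so it supports < directly; a struct type would not.
type Celsius float64

func main() {
	a, b := Celsius(20.5), Celsius(25)
	fmt.Println(a < b)
}
```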
#### Types in contracts
A contract may list explicit types that may be used as type
arguments.
These are expressed in the form `type-parameter-name type, type...`.
The `type` must be a predeclared type, such as `int`, or an aggregate
as discussed below.
For example,
```Go
contract SignedInteger(T) {
	T int, int8, int16, int32, int64
}
```
This contract specifies that the type argument must be one of the
listed types (`int`, `int8`, and so forth), or it must be a defined
type whose underlying type is one of the listed types.
When a parameterized function using this contract has a value of type
`T`, it may use any operation that is permitted by all of the listed
types.
This can be an operation like `<`, or for aggregate types an operation
like `range` or `<-`.
If the function can be compiled successfully using each type listed in
the contract, then the operation is permitted.
For the `Smallest` example shown earlier, we could use a contract like
this:
```Go
contract Ordered(T) {
	T int, int8, int16, int32, int64,
		uint, uint8, uint16, uint32, uint64, uintptr,
		float32, float64,
		string
}
```
(In practice this contract would likely be defined and exported in a
new standard library package, `contracts`, so that it could be used by
function, type, and contract definitions.)
Given that contract, we can write this function, now valid:
```Go
func Smallest(type T Ordered)(s []T) T {
	r := s[0] // panics if slice is empty
	for _, v := range s[1:] {
		if v < r {
			r = v
		}
	}
	return r
}
```
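Under the type inference rules described earlier, a call site would usually omit the type argument entirely. In the design's syntax (hypothetical; this does not compile with any released Go toolchain):

```Go
var s = []float64{3.1, 1.4, 2.7}
var small = Smallest(s) // type argument float64 inferred from s
```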
#### Conjunction and disjunction in contracts
The use of comma to separate types is a general mechanism.
A contract can be considered as a set of constraints, where the
constraints are either methods or types.
Separating constraints by a semicolon or newline means that the
constraints are a conjunction: each constraint must be satisfied.
Separating constraints by a comma means that the constraints are a
disjunction: at least one of the constraints must be satisfied.
With a conjunction of constraints in a contract, a generic function
may use any operation permitted by at least one of the constraints.
With a disjunction, a generic function may use any operation permitted
by all of the constraints.
Syntactically, the type parameter being constrained must be listed for
each individual conjunction constraint, but only once for the
disjunction constraints.
Normally methods will be listed as a conjunction, separated by a
semicolon or newline.
```Go
// PrintStringer1 and PrintStringer2 are equivalent.
contract PrintStringer1(T) {
	T String() string
	T Print()
}

contract PrintStringer2(T) {
	T String() string; T Print()
}
```
Normally builtin types will be listed as a disjunction, separated by
commas.
```Go
contract Float(T) {
	T float32, float64
}
```
However, this is not required.
For example:
```Go
contract IOCloser(S) {
	S Read([]byte) (int, error), // note trailing comma
		Write([]byte) (int, error)
	S Close() error
}
```
This contract accepts any type that has a `Close` method and also has
either a `Read` or a `Write` method (or both).
To put it another way, it accepts any type that implements either
`io.ReadCloser` or `io.WriteCloser` (or both).
A generic function using this contract may call the `Close`
method, but calling the `Read` or `Write` method requires a type
assertion to an interface type.
It's not clear whether this is useful, but it is valid.
Another, pedantic, example:
```Go
contract unsatisfiable(T) {
	T int
	T uint
}
```
This contract permits any type that is both `int` and `uint`.
Since there is no such type, the contract does not permit any type.
This is valid but useless.
#### Both types and methods in contracts
A contract may list both builtin types and methods, typically using
conjunctions and disjunctions as follows:
```Go
contract StringableSignedInteger(T) {
	T int, int8, int16, int32, int64
	T String() string
}
```
This contract permits any type defined as one of the listed types,
provided it also has a `String() string` method.
Although the `StringableSignedInteger` contract explicitly lists
`int`, the type `int` is not permitted as a type argument, since `int`
does not have a `String` method.
An example of a type argument that would be permitted is `MyInt`,
defined as:
```Go
type MyInt int

func (mi MyInt) String() string {
	return fmt.Sprintf("MyInt(%d)", mi)
}
```
#### Aggregate types in contracts
A type in a contract need not be a predeclared type; it can be a type
literal composed of predeclared types.
```Go
contract byteseq(T) {
	T string, []byte
}
```
The same rules apply.
The type argument for this contract may be `string` or `[]byte` or a
type whose underlying type is one of those.
A parameterized function with this contract may use any operation
permitted by both `string` and `[]byte`.
Given these definitions
```Go
type MyByte byte
type MyByteAlias = byte
```
the `byteseq` contract is satisfied by any of `string`, `[]byte`,
`[]MyByte`, `[]MyByteAlias`.
The `byteseq` contract permits writing generic functions that work
for either `string` or `[]byte` types.
```Go
func Join(type T byteseq)(a []T, sep T) (ret T) {
	if len(a) == 0 {
		// Use the result parameter as a zero value;
		// see discussion of zero value below.
		return ret
	}
	if len(a) == 1 {
		return T(append([]byte(nil), a[0]...))
	}
	n := len(sep) * (len(a) - 1)
	for i := 0; i < len(a); i++ {
		n += len(a[i]) // len works for both string and []byte
	}
	b := make([]byte, n)
	bp := copy(b, a[0])
	for _, s := range a[1:] {
		bp += copy(b[bp:], sep)
		bp += copy(b[bp:], s)
	}
	return T(b)
}
```
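`Join` leans entirely on operations that `string` and `[]byte` share. Those shared operations can be exercised in ordinary Go today; for example, `copy` accepts a `string` source for a `[]byte` destination:

```go
package main

import "fmt"

func main() {
	// len, copy, indexing, and range work for both string and []byte;
	// these are the operations a byteseq-constrained function may use.
	b := make([]byte, 5)
	n := copy(b, "hello") // copy accepts a string source for a []byte dest
	fmt.Println(n, string(b))
}
```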
#### Aggregates of type parameters in contracts
A type literal in a contract can refer not only to predeclared types,
but also to type parameters.
In this example, the `Slice` contract takes two parameters.
The first type parameter is required to be a slice of the second.
There are no constraints on the second type parameter.
```Go
contract Slice(S, Element) {
	S []Element
}
```
We can use the `Slice` contract to define a function that takes an
argument of a slice type and returns a result of that same type.
```Go
func Map(type S, Element Slice)(s S, f func(Element) Element) S {
	r := make(S, len(s))
	for i, v := range s {
		r[i] = f(v)
	}
	return r
}

type MySlice []int

func DoubleMySlice(s MySlice) MySlice {
	v := Map(MySlice, int)(s, func(e int) int { return 2 * e })
	// Here v has type MySlice, not type []int.
	return v
}
```
(Note: the type inference rules described above do not permit
inferring both `MySlice` and `int` when `DoubleMySlice` calls `Map`.
It may be worth extending them, to make it easier to use functions
that are careful to return the same result type as input type.
Similarly, we would consider extending the type inference rules to
permit inferring the type `Edge` from the type `Node` in the
`graph.New` example shown earlier.)
To avoid a parsing ambiguity, when a type literal in a contract refers
to a parameterized type, extra parentheses are required, so that it is
not confused with a method.
```Go
type M(type T) []T

contract C(T) {
	T M(T)   // T must implement the method M with an argument of type T
	T (M(T)) // T must be the type M(T)
}
```
#### Comparable types in contracts
Earlier we mentioned that there are two exceptions to the rule that
operators may only be used with types that are predeclared by the
language.
The exceptions are `==` and `!=`, which are permitted for struct,
array, and interface types.
These are useful enough that we want to be able to write a contract
that accepts any comparable type.
To do this we introduce a new predeclared contract: `comparable`.
The `comparable` contract takes a single type parameter.
It accepts as a type argument any comparable type.
It permits in a parameterized function the use of `==` and `!=` with
values of that type parameter.
As a predeclared contract, `comparable` may be used in a function or
type definition, or it may be embedded in another contract.
For example, this function may be instantiated with any comparable
type:
```Go
func Index(type T comparable)(s []T, x T) int {
	for i, v := range s {
		if v == x {
			return i
		}
	}
	return -1
}
```
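Because the type argument can be inferred, a call to `Index` would read just like a non-generic call. In the design's syntax (hypothetical; not compilable with any released toolchain):

```Go
var names = []string{"ann", "bob", "carl"}
var i = Index(names, "bob") // T inferred as string; i == 1
```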
#### Observations on types in contracts
It may seem awkward to explicitly list types in a contract, but
doing so makes clear both which type arguments are permitted at the
call site and which operations are permitted by the parameterized
function.
If the language later changes to support operator methods (there are
no such plans at present), then contracts will handle them as they do
any other kind of method.
There will always be a limited number of predeclared types, and a
limited number of operators that those types support.
Future language changes will not fundamentally change those facts, so
this approach will continue to be useful.
This approach does not attempt to handle every possible operator.
For example, there is no way to usefully express the struct field
reference `.` or the general index operator `[]`.
The expectation is that those will be handled using aggregate types in
a parameterized function definition, rather than requiring aggregate
types as a type argument.
For example, we expect functions that want to index into a slice to be
parameterized on the slice element type `T`, and to use parameters or
variables of type `[]T`.
As shown in the `DoubleMySlice` example above, this approach makes it
awkward to write generic functions that accept and return an aggregate
type and want to return the same result type as their argument type.
Defined aggregate types are not common, but they do arise.
This awkwardness is a weakness of this approach.
#### Type conversions
In a function with two type parameters `From` and `To`, a value of
type `From` may be converted to a value of type `To` if all the
types accepted by `From`'s contract can be converted to all the
types accepted by `To`'s contract.
If either type parameter does not accept types, then type conversions
are not permitted.
For example:
```Go
contract integer(T) {
	T int, int8, int16, int32, int64,
		uint, uint8, uint16, uint32, uint64, uintptr
}

contract integer2(T1, T2) {
	integer(T1)
	integer(T2)
}

func Convert(type To, From integer2)(from From) To {
	to := To(from)
	if From(to) != from {
		panic("conversion out of range")
	}
	return to
}
```
The type conversions in `Convert` are permitted because Go permits
every integer type to be converted to every other integer type.
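The round-trip check in `Convert` works because integer conversions in Go truncate rather than fail. Specialized by hand to `From` = `int`, `To` = `int8`, it looks like this (the function name is ours, not from the design):

```go
package main

import "fmt"

// convertChecked is a hand-specialized version of Convert for
// From = int, To = int8.
func convertChecked(from int) (int8, bool) {
	to := int8(from)
	return to, int(to) == from // false means the value was truncated
}

func main() {
	fmt.Println(convertChecked(100)) // fits in int8
	fmt.Println(convertChecked(300)) // out of range: truncated to 44
}
```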
#### Untyped constants
Some functions use untyped constants.
An untyped constant is permitted with a value of some type parameter
if it is permitted with every type accepted by the type parameter's
contract.
```Go
contract integer(T) {
	T int, int8, int16, int32, int64,
		uint, uint8, uint16, uint32, uint64, uintptr
}

func Add10(type T integer)(s []T) {
	for i, v := range s {
		s[i] = v + 10 // OK: 10 can convert to any integer type
	}
}

// This function is INVALID.
func Add1024(type T integer)(s []T) {
	for i, v := range s {
		s[i] = v + 1024 // INVALID: 1024 not permitted by int8/uint8
	}
}
```
### Implementation
Russ Cox [famously observed](https://research.swtch.com/generic) that
generics require choosing among slow programmers, slow compilers, or
slow execution times.
We believe that this design permits different implementation choices.
Code may be compiled separately for each set of type arguments, or it
may be compiled as though each type argument is handled similarly to
an interface type with method calls, or there may be some combination
of the two.
In other words, this design permits people to stop choosing slow
programmers, and permits the implementation to decide between slow
compilers (compile each set of type arguments separately) or slow
execution times (use method calls for each operation on a value of a
type argument).
### Summary
While this document is long and detailed, the actual design reduces to
a few major points.
* Functions and types can have type parameters, which are defined
using optional contracts.
* Contracts describe the methods required and the builtin types
permitted for a type argument.
* Contracts describe the methods and operations permitted for a type
parameter.
* Type inference will often permit omitting type arguments when
calling functions with type parameters.
This design is completely backward compatible, in that any valid Go 1
program will still be valid if this design is adopted (assuming
`contract` is treated as a pseudo-keyword that is only meaningful at
top level).
We believe that this design addresses people's needs for generic
programming in Go, without making the language any more complex than
necessary.
We can't truly know the impact on the language without years of
experience with this design.
That said, here are some speculations.
#### Complexity
One of the great aspects of Go is its simplicity.
Clearly this design makes the language more complex.
We believe that the increased complexity is small for people reading
well written generic code, rather than writing it.
Naturally people must learn the new syntax for declaring type
parameters.
The code within a generic function reads like ordinary Go code, as can
be seen in the examples below.
It is an easy shift to go from `[]int` to `[]T`.
Type parameter contracts serve effectively as documentation,
describing the type.
We expect that most people will not write generic code themselves, but
many people are likely to write packages that use generic code written
by others.
In the common case, generic functions work exactly like non-generic
functions: you simply call them.
Type inference means that you do not have to write out the type
arguments explicitly.
The type inference rules are designed to be unsurprising: either the
type arguments are deduced correctly, or the call fails and requires
explicit type arguments.
Type inference uses type identity, with no attempt to resolve two
types that are similar but not identical, which removes significant
complexity.
People using generic types will have to pass explicit type arguments.
The syntax for this is familiar.
The only change is passing arguments to types rather than only to
functions.
In general, we have tried to avoid surprises in the design.
Only time will tell whether we succeeded.
#### Pervasiveness
We expect that a few new packages will be added to the standard
library.
A new `slices` package will be similar to the existing `bytes` and
`strings` packages, operating on slices of any element type.
New `maps` and `chans` packages will provide simple algorithms that
are currently duplicated for each element type.
A `set` package may be added.
A new `contracts` package will provide standard embeddable contracts,
such as contracts that permit all integer types or all numeric types.
Packages like `container/list` and `container/ring`, and types like
`sync.Map`, will be updated to be compile-time type-safe.
The `math` package will be extended to provide a set of simple
standard algorithms for all numeric types, such as the ever popular
`Min` and `Max` functions.
It is likely that new special purpose compile-time type-safe container
types will be developed, and some may become widely used.
We do not expect approaches like the C++ STL iterator types to become
widely used.
In Go that sort of idea is more naturally expressed using an interface
type.
In C++ terms, using an interface type for an iterator can be seen as
carrying an abstraction penalty, in that run-time efficiency will be
less than C++ approaches that in effect inline all code; we believe
that Go programmers will continue to find that sort of penalty to be
acceptable.
As we get more container types, we may develop a standard `Iterator`
interface.
That may in turn lead to pressure to modify the language to add some
mechanism for using an `Iterator` with the `range` clause.
That is very speculative, though.
#### Efficiency
It is not clear what sort of efficiency people expect from generic
code.
Generic functions, rather than generic types, can probably be compiled
using an interface-based approach.
That will optimize compile time, in that the package is only compiled
once, but there will be some run time cost.
Generic types may most naturally be compiled separately for each
set of type arguments.
This will clearly carry a compile time cost, but there shouldn't be
any run time cost.
Compilers can also choose to implement generic types similarly to
interface types, using special purpose methods to access each element
that depends on a type parameter.
Only experience will show what people expect in this area.
#### Omissions
We believe that this design covers the basic requirements for
generic programming.
However, there are a number of programming constructs that are not
supported.
* No specialization.
There is no way to write multiple versions of a generic function
that are designed to work with specific type arguments (other than
using type assertions or type switches).
* No metaprogramming.
There is no way to write code that is executed at compile time to
generate code to be executed at run time.
* No higher level abstraction.
There is no way to speak about a function with type arguments other
than to call it or instantiate it.
There is no way to speak about a parameterized type other than to
instantiate it.
* No general type description.
For operator support contracts use specific types, rather than
describing the characteristics that a type must have.
This is easy to understand but may be limiting at times.
* No covariance or contravariance.
* No operator methods.
You can write a generic container that is compile-time type-safe,
but you can only access it with ordinary methods, not with syntax
like `c[k]`.
Similarly, there is no way to use `range` with a generic container
type.
* No currying.
There is no way to specify only some of the type arguments, other
than by using a type alias or a helper function.
* No adaptors.
There is no way for a contract to define adaptors that could be used
to support type arguments that do not already satisfy the contract,
such as, for example, defining an `==` operator in terms of an
`Equal` method, or vice-versa.
* No parameterization on non-type values such as constants.
This arises most obviously for arrays, where it might sometimes be
convenient to write `type Matrix(type n int) [n][n]float64`.
It might also sometimes be useful to specify significant values for
a container type, such as a default value for elements.
#### Issues
There are some issues with this design that deserve a more detailed
discussion.
We think these issues are relatively minor compared to the design as a
whole, but they still deserve a complete hearing and discussion.
##### The zero value
This design has no simple expression for the zero value of a type
parameter.
For example, consider this implementation of optional values that uses
pointers:
```Go
type Optional(type T) struct {
	p *T
}

func (o Optional(T)) Val() T {
	if o.p != nil {
		return *o.p
	}
	var zero T
	return zero
}
```
In the case where `o.p == nil`, we want to return the zero value of
`T`, but we have no way to write that.
It would be nice to be able to write `return nil`, but that wouldn't
work if `T` is, say, `int`; in that case we would have to write
`return 0`.
And, of course, there is no contract to support either `return nil` or
`return 0`.
Some approaches to this are:
* Use `var zero T`, as above, which works with the existing design
but requires an extra statement.
* Use `*new(T)`, which is ugly but works with the existing design.
* For results only, name the result parameter `_`, and use a naked
`return` statement to return the zero value.
* Extend the design to permit using `nil` as the zero value of any
generic type (but see [issue 22729](https://golang.org/issue/22729)).
* Extend the design to permit using `T{}`, where `T` is a type
parameter, to indicate the zero value of the type.
* Change the language to permit using `_` on the right hand of an
assignment (including `return` or a function call) as proposed in
[issue 19642](https://golang.org/issue/19642).
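The first two options already work in non-generic Go and behave identically; `*new(T)` dereferences a freshly allocated, zeroed `T`:

```go
package main

import "fmt"

func main() {
	// Option 1: declare a variable and rely on zero initialization.
	var zero int
	// Option 2: *new(T) yields the same zero value as an expression.
	fmt.Println(zero == 0, *new(string) == "", *new([]byte) == nil)
}
```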
We feel that more experience with this design is needed before
deciding what, if anything, to do here.
##### Lots of irritating silly parentheses
Calling a function with type parameters requires an additional list of
type arguments if the type arguments can not be inferred.
If the function returns a function, and we call that, we get still
more parentheses.
```Go
F(int, float64)(x, y)(s)
```
We experimented with other syntaxes, such as using a colon to separate
the type arguments from the regular arguments.
The current design seems to be the nicest, but perhaps something
better is possible.
##### Pointer vs. value methods in contracts
Contracts do not provide a way to distinguish between pointer and
value methods, so types that provide either will satisfy a contract.
This in turn requires that parameterized functions always permit
either kind of method.
This may be confusing, in that a parameterized function may invoke a
pointer method on a temporary value; if the pointer method changes the
value to which the receiver points, those changes will be lost.
We will have to judge from experience how much this confuses people in
practice.
##### Defined aggregate types
As discussed above, an extra type parameter is required for a function
to take, as an argument, a defined type whose underlying type is an
aggregate type, and to return the same defined type as a result.
For example, this function will map a function across a slice.
```Go
func Map(type Element)(s []Element, f func(Element) Element) []Element {
	r := make([]Element, len(s))
	for i, v := range s {
		r[i] = f(v)
	}
	return r
}
```
However, when called on a defined type, it will return a slice of the
element type of that type, rather than the defined type itself.
```Go
type MySlice []int

func DoubleMySlice(s MySlice) MySlice {
	s2 := Map(s, func(e int) int { return 2 * e })
	// Here s2 is type []int, not type MySlice.
	return MySlice(s2)
}
```
As discussed above with an example, this can be avoided by using an
extra type parameter for `Map`, and using a contract that describes
the required relationship between the slice and element types.
This works but is awkward.
##### Identifying the matched predeclared type
In this design we suggest permitting type assertions and type switches
on values whose types are based on type parameters, but those type
assertions and switches would always test the actual type argument.
The design doesn't provide any way to test the contract type matched
by the type argument.
Here is an example that shows the difference.
```Go
contract Float(F) {
	F float32, float64
}

func NewtonSqrt(type F Float)(v F) F {
	var iterations int
	switch v.(type) {
	case float32:
		iterations = 4
	case float64:
		iterations = 5
	default:
		panic(fmt.Sprintf("unexpected type %T", v))
	}
	// Code omitted.
}

type MyFloat float32

var G = NewtonSqrt(MyFloat(64))
```
This code will panic when initializing `G`, because the type of `v` in
the `NewtonSqrt` function will be `MyFloat`, not `float32` or
`float64`.
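The panic can be reproduced without generics: a type switch on an interface value holding a `MyFloat` matches neither `float32` nor `float64`, because type switches test the defined type.

```go
package main

import "fmt"

type MyFloat float32

// classify mimics the switch in NewtonSqrt on the dynamic type.
func classify(v interface{}) string {
	switch v.(type) {
	case float32:
		return "float32"
	case float64:
		return "float64"
	default:
		return fmt.Sprintf("%T", v) // the defined type, hence the panic
	}
}

func main() {
	fmt.Println(classify(float32(64)))
	fmt.Println(classify(MyFloat(64)))
}
```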
What this function actually wants to test is not the type of `v`, but
the type that `v` matched in the contract.
One way to handle this would be to permit type switches on the type
`F`, rather than the value `v`, with the proviso that the type `F`
would always match a type defined in the contract.
This kind of type switch would only be permitted if the contract does
list explicit types, and only types listed in the contract would be
permitted as cases.
If we took this approach, we would stop permitting type assertions and
switches on values whose type is based on a type parameter.
Those assertions and switches can always be done by first converting
the value to the empty interface type.
A different approach would be that if a contract specifies any types
for a type parameter, then type switches and assertions on values
whose type is, or is based on, that type parameter would match only
the types listed in the type parameter's contract.
It is still possible to match the value's actual type by first
converting it to the `interface{}` type and then doing the type
assertion or switch.
#### Discarded ideas
This design is not perfect, and it will be changed as we gain
experience with it.
That said, there are many ideas that we've already considered in
detail.
This section lists some of those ideas in the hopes that it will help
to reduce repetitive discussion.
The ideas are presented in the form of a FAQ.
##### Why not use interfaces instead of contracts?
_The interface method syntax is familiar._
_Why introduce another way to write methods?_
Contracts, unlike interfaces, support multiple types, including
describing ways that the types refer to each other.
It is unclear how to represent operators using interface methods.
We considered syntaxes like `+(T, T) T`, but that is confusing and
repetitive.
Also, a minor point, but `==(T, T) bool` does not correspond to the
`==` operator, which returns an untyped boolean value, not `bool`.
We also considered writing simply `+` or `==`.
That seems to work but unfortunately the semicolon insertion rules
require writing a semicolon after each operator at the end of a line.
Using contracts that look like functions gives us a familiar syntax at
the cost of some repetition.
These are not fatal problems, but they are difficulties.
More seriously, a contract is a relationship between the definition of
a generic function and the callers of that function.
To put it another way, it is a relationship between a set of type
parameters and a set of type arguments.
The contract defines how values of the type parameters may be used,
and defines the requirements on the type arguments.
That is why it is called a contract: because it defines the behavior
on both sides.
An interface is a type, not a relationship between function
definitions and callers.
A program can have a value of an interface type, but it makes no sense
to speak of a value of a contract type.
A value of interface type has both a static type (the interface type)
and a dynamic type (some non-interface type), but there is no similar
concept for contracts.
In other words, contracts are not extensions of interface types.
There are things you can do with a contract that you cannot do with an
interface type, and there are things you can do with an interface type
that you cannot do with a contract.
It is true that a contract that has a single type parameter and that
lists only methods, not builtin types, for that type parameter, looks
similar to an interface type.
But all the similarity amounts to is that both provide a list of
methods.
We could consider permitting using an interface type as a contract
with a single type parameter that lists only methods.
But that should not mislead us into thinking that contracts are
interfaces.
##### Why not permit contracts to describe a type?
_In order to use operators contracts have to explicitly and tediously_
_list types._
_Why not permit them to describe a type?_
There are many different ways that a Go type can be used.
While it is possible to invent notation to describe the various
operations in a contract, it leads to a proliferation of additional
syntactic constructs, making contracts complicated and hard to read.
The approach used in this design is simpler and relies on only a few
new syntactic constructs and names.
##### Why not put type parameters on packages?
We investigated this extensively.
It becomes problematic when you want to write a `list` package, and
you want that package to include a `Transform` function that converts
a `List` of one element type to a `List` of another element type.
It's very awkward for a function in one instantiation of a package to
return a type that requires a different instantiation of the package.
It also confuses package boundaries with type definitions.
There is no particular reason to think that the uses of parameterized
types will break down neatly into packages.
Sometimes they will, sometimes they won't.
##### Why not use the syntax `F<T>` like C++ and Java?
When parsing code within a function, such as `v := F<T>(v)`, at the point
of seeing the `<` it's ambiguous whether we are seeing a type
instantiation or an expression using the `<` operator.
Resolving that requires effectively unbounded lookahead.
In general we strive to keep the Go parser simple.
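This ambiguity can be demonstrated with the standard `go/parser` package: given only `F < T`, the parser commits to a comparison expression, so reading it as an instantiation would require lookahead past a matching `>`. The snippet below is ordinary, current Go; the `exprKind` helper is a name invented for this illustration.

```go
package main

import (
	"fmt"
	"go/parser"
)

// exprKind parses src as a single Go expression and reports the
// concrete AST node type the parser produced.
func exprKind(src string) string {
	expr, err := parser.ParseExpr(src)
	if err != nil {
		return err.Error()
	}
	return fmt.Sprintf("%T", expr)
}

func main() {
	// With no extra lookahead, `F < T` is a comparison between two
	// identifiers, not the start of a type instantiation.
	fmt.Println(exprKind("F < T")) // *ast.BinaryExpr
}
```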
##### Why not use the syntax `F[T]`?
When parsing a type declaration `type A [T] int` it's ambiguous
whether this is a parameterized type defined (uselessly) as `int` or
whether it is an array type with `T` elements.
However, this would be addressed by requiring `type A [type T] int`
for a parameterized type.
Parsing declarations like `func f(A[T]int)` (a single parameter of
type `[T]int`) and `func f(A[T], int)` (two parameters, one of type
`A[T]` and one of type `int`) show that some additional parsing
lookahead is required.
This is solvable but adds parsing complexity.
The language generally permits a trailing comma in a comma-separated
list, so `A[T,]` should be permitted if `A` is a parameterized type,
but normally would not be permitted for an index expression.
However, the parser can't know whether `A` is a parameterized type or
a value of slice, array, or map type, so this parse error can not be
reported until after type checking is complete.
Again, solvable but complicated.
More generally, we felt that the square brackets were too intrusive on
the page and that parentheses were more Go like.
We will reevaluate this decision as we gain more experience.
##### Why not use `F«T»`?
We considered it but we couldn't bring ourselves to require
non-ASCII.
##### Why not define contracts in a standard package?
_Instead of writing out contracts, use names like_
_`contracts.Arithmetic` and `contracts.Comparable`._
Listing all the possible combinations of types gets rather lengthy.
It also introduces a new set of names that not only the writer of
generic code, but, more importantly, the reader, must remember.
One of the driving goals of this design is to introduce as few new
names as possible.
In this design we introduce one new keyword and one new predefined
name.
We expect that if people find such names useful, we can introduce a
package `contracts` that defines the useful names in the form of
contracts that can be used by other types and functions and embedded
in other contracts.
#### Comparison with Java
Most complaints about Java generics center around type erasure.
This design does not have type erasure.
The reflection information for a generic type will include the full
compile-time type information.
In Java type wildcards (`List<? extends Number>`, `List<? super
Number>`) implement covariance and contravariance.
These concepts are missing from Go, which makes generic types much
simpler.
#### Comparison with C++
C++ templates do not enforce any constraints on the type arguments
(unless the concept proposal is adopted).
This means that changing template code can accidentally break far-off
instantiations.
It also means that error messages are reported only at instantiation
time, and can be deeply nested and difficult to understand.
This design avoids these problems through explicit contracts.
C++ supports template metaprogramming, which can be thought of as
ordinary programming done at compile time using a syntax that is
completely different than that of non-template C++.
This design has no similar feature.
This saves considerable complexity while losing some power and run
time efficiency.
### Examples
The following sections are examples of how this design could be used.
This is intended to address specific areas where people have created
user experience reports concerned with Go's lack of generics.
#### Sort
Before the introduction of `sort.Slice`, a common complaint was the
need for boilerplate definitions in order to use `sort.Sort`.
With this design, we can add to the sort package as follows:
```Go
type orderedSlice(type Elem Ordered) []Elem
func (s orderedSlice(Elem)) Len() int { return len(s) }
func (s orderedSlice(Elem)) Less(i, j int) bool { return s[i] < s[j] }
func (s orderedSlice(Elem)) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
// OrderedSlice sorts the slice s in ascending order.
// The elements of s must be ordered using the < operator.
func OrderedSlice(type Elem Ordered)(s []Elem) {
sort.Sort(orderedSlice(Elem)(s))
}
```
Now we can write:
```Go
sort.OrderedSlice(int32)([]int32{3, 5, 2})
```
We can rely on type inference to omit the type argument list:
```Go
sort.OrderedSlice([]string{"a", "c", "b"})
```
Along the same lines, we can add a function for sorting using a
comparison function, similar to `sort.Slice` but writing the function
to take values rather than slice indexes.
```Go
type sliceFn(type Elem) struct {
s []Elem
f func(Elem, Elem) bool
}
func (s sliceFn(Elem)) Len() int { return len(s.s) }
func (s sliceFn(Elem)) Less(i, j int) bool { return s.f(s.s[i], s.s[j]) }
func (s sliceFn(Elem)) Swap(i, j int) { s.s[i], s.s[j] = s.s[j], s.s[i] }
// SliceFn sorts the slice s according to the function f.
func SliceFn(type Elem)(s []Elem, f func(Elem, Elem) bool) {
Sort(sliceFn(Elem){s, f})
}
```
An example of calling this might be:
```Go
var s []*Person
// ...
sort.SliceFn(s, func(p1, p2 *Person) bool { return p1.Name < p2.Name })
```
#### Map keys
Here is how to get a slice of the keys of any map.
```Go
package maps
// Keys returns the keys of the map m.
// Note that map keys (here called type K) must be comparable.
func Keys(type K, V comparable(K))(m map[K]V) []K {
r := make([]K, 0, len(m))
for k := range m {
r = append(r, k)
}
return r
}
```
In typical use the types will be inferred.
```Go
k := maps.Keys(map[int]int{1:2, 2:4}) // sets k to []int{1, 2} (or {2, 1})
```
#### Map/Reduce/Filter
Here is an example of how to write map, reduce, and filter functions
for slices.
These functions are intended to correspond to the similar functions in
Lisp, Python, Java, and so forth.
```Go
// Package slices implements various slice algorithms.
package slices
// Map turns a []T1 to a []T2 using a mapping function.
func Map(type T1, T2)(s []T1, f func(T1) T2) []T2 {
r := make([]T2, len(s))
for i, v := range s {
r[i] = f(v)
}
return r
}
// Reduce reduces a []T1 to a single value using a reduction function.
func Reduce(type T1, T2)(s []T1, initializer T2, f func(T2, T1) T2) T2 {
r := initializer
for _, v := range s {
r = f(r, v)
}
return r
}
// Filter filters values from a slice using a filter function.
func Filter(type T)(s []T, f func(T) bool) []T {
var r []T
for _, v := range s {
if f(v) {
r = append(r, v)
}
}
return r
}
```
Example calls:
```Go
s := []int{1, 2, 3}
floats := slices.Map(s, func(i int) float64 { return float64(i) })
sum := slices.Reduce(s, 0, func(i, j int) int { return i + j })
evens := slices.Filter(s, func(i int) bool { return i%2 == 0 })
```
#### Sets
Many people have asked for Go's builtin map type to be extended, or
rather reduced, to support a set type.
Here is a type-safe implementation of a set type, albeit one that uses
methods rather than operators like `[]`.
```Go
// Package set implements sets of any type.
package set
type Set(type Elem comparable) map[Elem]struct{}
func Make(type Elem comparable)() Set(Elem) {
return make(Set(Elem))
}
func (s Set(Elem)) Add(v Elem) {
s[v] = struct{}{}
}
func (s Set(Elem)) Delete(v Elem) {
delete(s, v)
}
func (s Set(Elem)) Contains(v Elem) bool {
_, ok := s[v]
return ok
}
func (s Set(Elem)) Len() int {
return len(s)
}
func (s Set(Elem)) Iterate(f func(Elem)) {
for v := range s {
f(v)
}
}
```
Example use:
```Go
s := set.Make(int)()
s.Add(1)
if s.Contains(2) { panic("unexpected 2") }
```
This example, like the sort examples above, shows how to use this
design to provide a compile-time type-safe wrapper around an
existing API.
#### Channels
Many simple general purpose channel functions are never written,
because they must be written using reflection and the caller must type
assert the results.
With this design they become easy to write.
```Go
package chans
import "runtime"
// Ranger returns a Sender and a Receiver. The Receiver provides a
// Next method to retrieve values. The Sender provides a Send method
// to send values and a Close method to stop sending values. The Next
// method indicates when the Sender has been closed, and the Send
// method indicates when the Receiver has been freed.
//
// This is a convenient way to exit a goroutine sending values when
// the receiver stops reading them.
func Ranger(type T)() (*Sender(T), *Receiver(T)) {
c := make(chan T)
d := make(chan bool)
s := &Sender(T){values: c, done: d}
r := &Receiver(T){values: c, done: d}
runtime.SetFinalizer(r, r.finalize)
return s, r
}
// A Sender is used to send values to a Receiver.
type Sender(type T) struct {
values chan<- T
done <-chan bool
}
// Send sends a value to the receiver. It returns whether any more
// values may be sent; if it returns false the value was not sent.
func (s *Sender(T)) Send(v T) bool {
select {
case s.values <- v:
return true
case <-s.done:
return false
}
}
// Close tells the receiver that no more values will arrive.
// After Close is called, the Sender may no longer be used.
func (s *Sender(T)) Close() {
close(s.values)
}
// A Receiver receives values from a Sender.
type Receiver(type T) struct {
values <-chan T
done chan<- bool
}
// Next returns the next value from the channel. The bool result
// indicates whether the value is valid, or whether the Sender has
// been closed and no more values will be received.
func (r *Receiver(T)) Next() (T, bool) {
v, ok := <-r.values
return v, ok
}
// finalize is a finalizer for the receiver.
func (r *Receiver(T)) finalize() {
close(r.done)
}
```
There is an example of using this function in the next section.
#### Containers
One of the frequent requests for generics in Go is the ability to
write compile-time type-safe containers.
This design makes it easy to write a compile-time type-safe wrapper
around an existing container; we won't write out an example for that.
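Although the text leaves that example out, a minimal sketch of such a wrapper might look like the following. It uses the released square-bracket syntax rather than this draft's parentheses, and the `TypedList` name is invented here; the single remaining type assertion is hidden inside the wrapper, so callers stay compile-time type-safe.

```go
package main

import (
	"container/list"
	"fmt"
)

// TypedList is a hypothetical type-safe wrapper around the existing
// container/list.List, which stores interface values internally.
type TypedList[T any] struct{ l list.List }

// PushBack appends v to the list.
func (t *TypedList[T]) PushBack(v T) { t.l.PushBack(v) }

// Front returns the first element and whether the list is non-empty.
func (t *TypedList[T]) Front() (T, bool) {
	e := t.l.Front()
	if e == nil {
		var zero T
		return zero, false
	}
	// The only type assertion lives here, inside the wrapper.
	return e.Value.(T), true
}

func main() {
	var t TypedList[string]
	t.PushBack("hello")
	v, ok := t.Front()
	fmt.Println(v, ok) // hello true
}
```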
This design also makes it easy to write a compile-time type-safe
container that does not use boxing.
Here is an example of an ordered map implemented as a binary tree.
The details of how it works are not too important.
The important points are:
* The code is written in a natural Go style, using the key and value
types where needed.
* The keys and values are stored directly in the nodes of the tree,
not using pointers and not boxed as interface values.
```Go
// Package orderedmap provides an ordered map, implemented as a binary tree.
package orderedmap
import "chans"
// Map is an ordered map.
type Map(type K, V) struct {
root *node(K, V)
compare func(K, K) int
}
// node is the type of a node in the binary tree.
type node(type K, V) struct {
key K
val V
left, right *node(K, V)
}
// New returns a new map.
func New(type K, V)(compare func(K, K) int) *Map(K, V) {
return &Map(K, V){compare: compare}
}
// find looks up key in the map, and returns either a pointer
// to the node holding key, or a pointer to the location where
// such a node would go.
func (m *Map(K, V)) find(key K) **node(K, V) {
pn := &m.root
for *pn != nil {
switch cmp := m.compare(key, (*pn).key); {
case cmp < 0:
pn = &(*pn).left
case cmp > 0:
pn = &(*pn).right
default:
return pn
}
}
return pn
}
// Insert inserts a new key/value into the map.
// If the key is already present, the value is replaced.
// Returns true if this is a new key, false if already present.
func (m *Map(K, V)) Insert(key K, val V) bool {
pn := m.find(key)
if *pn != nil {
(*pn).val = val
return false
}
*pn = &node(K, V){key: key, val: val}
return true
}
// Find returns the value associated with a key, or zero if not present.
// The found result reports whether the key was found.
func (m *Map(K, V)) Find(key K) (V, bool) {
pn := m.find(key)
if *pn == nil {
var zero V // see the discussion of zero values, above
return zero, false
}
return (*pn).val, true
}
// keyValue is a pair of key and value used when iterating.
type keyValue(type K, V) struct {
key K
val V
}
// InOrder returns an iterator that does an in-order traversal of the map.
func (m *Map(K, V)) InOrder() *Iterator(K, V) {
sender, receiver := chans.Ranger(keyValue(K, V))()
var f func(*node(K, V)) bool
f = func(n *node(K, V)) bool {
if n == nil {
return true
}
// Stop sending values if sender.Send returns false,
// meaning that nothing is listening at the receiver end.
return f(n.left) &&
sender.Send(keyValue(K, V){n.key, n.val}) &&
f(n.right)
}
go func() {
f(m.root)
sender.Close()
}()
	return &Iterator(K, V){receiver}
}
// Iterator is used to iterate over the map.
type Iterator(type K, V) struct {
r *chans.Receiver(keyValue(K, V))
}
// Next returns the next key and value pair, and a boolean indicating
// whether they are valid or whether we have reached the end.
func (it *Iterator(K, V)) Next() (K, V, bool) {
keyval, ok := it.r.Next()
if !ok {
var zerok K
var zerov V
return zerok, zerov, false
}
return keyval.key, keyval.val, true
}
```
This is what it looks like to use this package:
```Go
import "container/orderedmap"
var m = orderedmap.New(string, string)(strings.Compare)
func Add(a, b string) {
m.Insert(a, b)
}
```
#### Append
The predeclared `append` function exists to replace the boilerplate
otherwise required to grow a slice.
Before `append` was added to the language, there was a function `Add`
in the bytes package with the signature
```Go
func Add(s, t []byte) []byte
```
that appended two `[]byte` values together, returning a new slice.
That was fine for `[]byte`, but if you had a slice of some other
type, you had to write essentially the same code to append more
values.
If this design were available back then, perhaps we would not have
added `append` to the language.
Instead, we could write something like this:
```Go
package slices
// Append adds values to the end of a slice, returning a new slice.
func Append(type T)(s []T, t ...T) []T {
lens := len(s)
tot := lens + len(t)
if tot <= cap(s) {
s = s[:tot]
} else {
news := make([]T, tot, tot + tot/2)
copy(news, s)
s = news
}
copy(s[lens:tot], t)
return s
}
```
That example uses the predeclared `copy` function, but that's OK, we
can write that one too:
```Go
// Copy copies values from t to s, stopping when either slice is
// full, returning the number of values copied.
func Copy(type T)(s, t []T) int {
i := 0
for ; i < len(s) && i < len(t); i++ {
s[i] = t[i]
}
return i
}
```
These functions can be used as one would expect:
```Go
s := slices.Append([]int{1, 2, 3}, 4, 5, 6)
slices.Copy(s[3:], []int{7, 8, 9})
```
This code doesn't implement the special case of appending or copying a
`string` to a `[]byte`, and it's unlikely to be as efficient as the
implementation of the predeclared function.
Still, this example shows that using this design would permit append
and copy to be written generically, once, without requiring any
additional special language features.
#### Metrics
In a [Go experience
report](https://medium.com/@sameer_74231/go-experience-report-for-generics-google-metrics-api-b019d597aaa4)
Sameer Ajmani describes a metrics implementation.
Each metric has a value and one or more fields.
The fields have different types.
Defining a metric requires specifying the types of the fields, and
creating a value with an Add method.
The Add method takes the field types as arguments, and records an
instance of that set of fields.
The C++ implementation uses a variadic template.
The Java implementation includes the number of fields in the name of
the type.
Both the C++ and Java implementations provide compile-time type-safe
Add methods.
Here is how to use this design to provide similar functionality in
Go with a compile-time type-safe Add method.
Because there is no support for a variadic number of type arguments,
we must use different names for a different number of arguments, as in
Java.
This implementation only works for comparable types.
A more complex implementation could accept a comparison function to
work with arbitrary types.
```Go
package metrics
import "sync"
type Metric1(type T comparable) struct {
mu sync.Mutex
m map[T]int
}
func (m *Metric1(T)) Add(v T) {
m.mu.Lock()
defer m.mu.Unlock()
if m.m == nil {
m.m = make(map[T]int)
}
	m.m[v]++
}
contract cmp2(T1, T2) {
comparable(T1)
comparable(T2)
}
type key2(type T1, T2 cmp2) struct {
f1 T1
f2 T2
}
type Metric2(type T1, T2 cmp2) struct {
mu sync.Mutex
m map[key2(T1, T2)]int
}
func (m *Metric2(T1, T2)) Add(v1 T1, v2 T2) {
m.mu.Lock()
defer m.mu.Unlock()
if m.m == nil {
m.m = make(map[key2(T1, T2)]int)
}
	m.m[key2(T1, T2){v1, v2}]++
}
contract cmp3(T1, T2, T3) {
comparable(T1)
comparable(T2)
comparable(T3)
}
type key3(type T1, T2, T3 cmp3) struct {
f1 T1
f2 T2
f3 T3
}
type Metric3(type T1, T2, T3 cmp3) struct {
mu sync.Mutex
m map[key3(T1, T2, T3)]int
}
func (m *Metric3(T1, T2, T3)) Add(v1 T1, v2 T2, v3 T3) {
m.mu.Lock()
defer m.mu.Unlock()
if m.m == nil {
		m.m = make(map[key3(T1, T2, T3)]int)
}
	m.m[key3(T1, T2, T3){v1, v2, v3}]++
}
// Repeat for the maximum number of permitted arguments.
```
Using this package looks like this:
```Go
import "metrics"
var m = metrics.Metric2(string, int){}
func F(s string, i int) {
m.Add(s, i) // this call is type checked at compile time
}
```
This package implementation has a certain amount of repetition due to
the lack of support for variadic package type parameters.
Using the package, though, is easy and type safe.
#### List transform
While slices are efficient and easy to use, there are occasional cases
where a linked list is appropriate.
This example primarily shows transforming a linked list of one type to
another type, as an example of using different instantiations of the
same parameterized type.
```Go
package list
// List is a linked list.
type List(type T) struct {
head, tail *element(T)
}
// An element is an entry in a linked list.
type element(type T) struct {
next *element(T)
val T
}
// Push pushes an element to the end of the list.
func (lst *List(T)) Push(v T) {
if lst.tail == nil {
lst.head = &element(T){val: v}
lst.tail = lst.head
} else {
lst.tail.next = &element(T){val: v }
lst.tail = lst.tail.next
}
}
// Iterator ranges over a list.
type Iterator(type T) struct {
next **element(T)
}
// Range returns an Iterator starting at the head of the list.
func (lst *List(T)) Range() *Iterator(T) {
	return &Iterator(T){next: &lst.head}
}
// Next advances the iterator.
// It returns whether there are more elements.
func (it *Iterator(T)) Next() bool {
if *it.next == nil {
return false
}
it.next = &(*it.next).next
return true
}
// Val returns the value of the current element.
// The bool result reports whether the value is valid.
func (it *Iterator(T)) Val() (T, bool) {
if *it.next == nil {
var zero T
return zero, false
}
return (*it.next).val, true
}
// Transform runs a transform function on a list returning a new list.
func Transform(type T1, T2)(lst *List(T1), f func(T1) T2) *List(T2) {
ret := &List(T2){}
it := lst.Range()
	for {
		v, ok := it.Val()
		if !ok {
			break
		}
		ret.Push(f(v))
		it.Next()
	}
return ret
}
```
#### Context
The standard "context" package provides a `Context.Value` method to
fetch a value from a context.
The method returns `interface{}`, so using it normally requires a type
assertion to the correct type.
Here is an example of how we can add type parameters to the "context"
package to provide a type-safe wrapper around `Context.Value`.
```Go
// Key is a key that can be used with Context.Value.
// Rather than calling Context.Value directly, use Key.Load.
//
// The zero value of Key is not ready for use; use NewKey.
type Key(type V) struct {
name string
}
// NewKey returns a key used to store values of type V in a Context.
// Every Key returned is unique, even if the name is reused.
func NewKey(type V)(name string) *Key(V) {
return &Key(V){name: name}
}
// WithValue returns a new context with v associated with k.
func (k *Key(V)) WithValue(parent Context, v V) Context {
return WithValue(parent, k, v)
}
// Value loads the value associated with k from ctx and reports
// whether it was successful.
func (k *Key(V)) Value(ctx Context) (V, bool) {
v, present := ctx.Value(k).(V)
	return v, present
}
// String returns the name and expected value type.
func (k *Key(V)) String() string {
var v V
return fmt.Sprintf("%s(%T)", k.name, v)
}
```
To see how this might be used, consider the net/http package's
`ServerContextKey`:
```Go
var ServerContextKey = &contextKey{"http-server"}
// used as:
ctx := context.WithValue(ctx, ServerContextKey, srv)
s, present := ctx.Value(ServerContextKey).(*Server)
```
This could be written instead as
```Go
var ServerContextKey = context.NewKey(*Server)("http_server")
// used as:
ctx := ServerContextKey.WithValue(ctx, srv)
s, present := ServerContextKey.Value(ctx)
```
Code that uses `Key.WithValue` and `Key.Value` instead of
`context.WithValue` and `context.Value` does not need any type
assertions and is compile-time type-safe.
#### Dot product
A generic dot product implementation that works for slices of any
numeric type.
```Go
// Numeric is a contract that matches any numeric type.
// It would likely be in a contracts package in the standard library.
contract Numeric(T) {
T int, int8, int16, int32, int64,
uint, uint8, uint16, uint32, uint64, uintptr,
float32, float64,
complex64, complex128
}
func DotProduct(type T Numeric)(s1, s2 []T) T {
if len(s1) != len(s2) {
panic("DotProduct: slices of unequal length")
}
var r T
for i := range s1 {
r += s1[i] * s2[i]
}
return r
}
```
#### Absolute difference
Compute the absolute difference between two numeric values, by using
an `Abs` method.
This uses the same `Numeric` contract defined in the last example.
This example uses more machinery than is appropriate for the simple
case of computing the absolute difference.
It is intended to show how the common part of algorithms can be
factored into code that uses methods, where the exact definition of
the methods can vary based on the kind of type being used.
```Go
// NumericAbs matches numeric types with an Abs method.
contract NumericAbs(T) {
Numeric(T)
T Abs() T
}
// AbsDifference computes the absolute value of the difference of
// a and b, where the absolute value is determined by the Abs method.
func AbsDifference(type T NumericAbs)(a, b T) T {
d := a - b
return d.Abs()
}
```
We can define an `Abs` method appropriate for different numeric types.
```Go
// OrderedNumeric matches numeric types that support the < operator.
contract OrderedNumeric(T) {
T int, int8, int16, int32, int64,
uint, uint8, uint16, uint32, uint64, uintptr,
float32, float64
}
// Complex matches the two complex types, which do not have a < operator.
contract Complex(T) {
T complex64, complex128
}
// OrderedAbs is a helper type that defines an Abs method for
// ordered numeric types.
type OrderedAbs(type T OrderedNumeric) T
func (a OrderedAbs(T)) Abs() OrderedAbs(T) {
if a < 0 {
return -a
}
return a
}
// ComplexAbs is a helper type that defines an Abs method for
// complex types.
type ComplexAbs(type T Complex) T
func (a ComplexAbs(T)) Abs() T {
r := float64(real(a))
i := float64(imag(a))
d := math.Sqrt(r * r + i * i)
return T(complex(d, 0))
}
```
We can then define functions that do the work for the caller by
converting to and from the types we just defined.
```Go
func OrderedAbsDifference(type T OrderedNumeric)(a, b T) T {
return T(AbsDifference(OrderedAbs(T)(a), OrderedAbs(T)(b)))
}
func ComplexAbsDifference(type T Complex)(a, b T) T {
return T(AbsDifference(ComplexAbs(T)(a), ComplexAbs(T)(b)))
}
```
It's worth noting that this design is not powerful enough to write
code like the following:
```Go
// This function is INVALID.
func GeneralAbsDifference(type T Numeric)(a, b T) T {
switch a.(type) {
case int, int8, int16, int32, int64,
uint, uint8, uint16, uint32, uint64, uintptr,
float32, float64:
return OrderedAbsDifference(a, b) // INVALID
case complex64, complex128:
return ComplexAbsDifference(a, b) // INVALID
}
}
```
The calls to `OrderedAbsDifference` and `ComplexAbsDifference` are
invalid, because not all the types that satisfy the `Numeric` contract
can satisfy the `OrderedNumeric` or `Complex` contracts.
Although the type switch means that this code would conceptually work
at run time, there is no support for writing this code at compile
time.
This is another way of expressing one of the omissions listed above:
this design does not provide for specialization.
### Proposal: Add support for Persistent Memory in Go
Authors: Jerrin Shaji George, Mohit Verma, Rajesh Venkatasubramanian, Pratap Subrahmanyam
Last updated: January 20, 2021
Discussion at https://golang.org/issue/43810.
## Abstract
Persistent memory is a new memory technology that allows byte-addressability at
DRAM-like access speed and provides disk-like persistence. Operating systems
such as Linux and Windows server already support persistent memory and the
hardware is available commercially in servers. More details on this technology
can be found at [pmem.io](https://pmem.io).
This is a proposal to add native support for programming persistent memory in
Go. A detailed design of our approach to add this support is described in our
2020 USENIX ATC paper [go-pmem](https://www.usenix.org/system/files/atc20-george.pdf).
An implementation of the above design based on Go 1.15 release is available
[here](http://github.com/jerrinsg/go-pmem).
## Background
Persistent Memory is a new type of random-access memory that offers persistence
and byte-level addressability at DRAM-like access speed. Operating systems
provide the capability to mmap this memory to an application's virtual address
space. Applications can then use this mmap'd region just like memory. Durable
data updates made to persistent memory can be retrieved by an application even
after a crash/restart.
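The mmap model described above can be sketched in ordinary Go using `syscall.Mmap` on a temp file, which stands in for a persistent-memory device file (on Linux, e.g. a file on a DAX filesystem); the `roundTrip` helper is a name invented for this sketch. Updates made through the mapped byte slice remain visible through normal file reads, mimicking durability across a restart.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// roundTrip writes msg through a file-backed memory mapping, unmaps,
// and reads the bytes back through the ordinary file API.
func roundTrip(msg string) string {
	f, err := os.CreateTemp("", "pmem-demo")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()
	const size = 4096
	if err := f.Truncate(size); err != nil {
		panic(err)
	}
	// Map the file into the address space; the application updates
	// the mapping directly, as it would update persistent memory.
	data, err := syscall.Mmap(int(f.Fd()), 0, size,
		syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_SHARED)
	if err != nil {
		panic(err)
	}
	copy(data, msg)
	if err := syscall.Munmap(data); err != nil {
		panic(err)
	}
	// The update persists in the backing file after unmapping.
	buf := make([]byte, len(msg))
	if _, err := f.ReadAt(buf, 0); err != nil {
		panic(err)
	}
	return string(buf)
}

func main() {
	fmt.Println(roundTrip("durable bytes")) // durable bytes
}
```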
Applications using persistent memory benefit in a number of ways. Since durable
data updates made to persistent memory is non-volatile, applications no longer
need to marshal data between DRAM and storage devices. A significant portion
of application code that used to do this heavy-lifting can now be retired.
Another big advantage is a significant reduction in application startup times on
restart. This is because applications no longer need to transform their at-rest
data into an in-memory representation. For example, commercial applications like
SAP HANA report a [12x improvement](https://cloud.google.com/blog/topics/partners/available-first-on-google-cloud-intel-optane-dc-persistent-memory)
in startup times using persistent memory.
This proposal is to provide first-class native support for Persistent memory in
Go. Our design modifies Go 1.15 to introduce a garbage collected persistent
heap. We also instrument the Go compiler to introduce semantics that enables
transactional updates to persistent-memory datastructures. We call our modified
Go suite as *go-pmem*. A Redis database developed with using go-pmem offers more
than 5x throughput compared to Redis running on NVMe SSD.
## Proposal
We propose adding native support for programming persistent memory in Go. This
requires making the following features available in Go:
1. Support persistent memory allocations
2. Garbage collection of persistent memory heap objects
3. Support modifying persistent memory datastructures in a crash-consistent
manner
4. Enable applications to recover following a crash/restart
5. Provide applications a mechanism to retrieve back durably stored data in
persistent memory
To support these features, we extended the Go runtime and added a new SSA pass
in our implementation as discussed below.
## Rationale
There exists libraries such as Intel [PMDK](https://pmem.io/pmdk/) that provides
C and C++ developers support for persistent memory programming. Other
programming languages such as Java and Python are exploring ways to enable
efficient access to persistent memory. E.g.,
* Java - https://bugs.openjdk.java.net/browse/JDK-8207851
* Python - https://pynvm.readthedocs.io/en/v0.3.1/
But no language provides native persistent memory programming support. We
believe this is an impediment to widespread adoption of this technology. This
proposal attempts to remedy this problem by making Go the first language to
completely support persistent memory.
### Why language change?
The C libraries expose a programming model significantly different (and complex)
than existing programming models. In particular, memory management becomes
difficult with libraries. A missed "free" call can lead to memory leaks and
persistent memory leaks become permanent and do not vanish after application
restarts. In a language with a managed runtime such as Go, providing visibility
to its garbage collector into a memory region managed by a library becomes very
difficult.
Identifying and instrumenting stores to persistent memory data to provide
transactional semantics also requires programming language change.
In our implementation experience, the Go runtime and compiler were easily
amenable to adding these capabilities.
## Compatibility
Our current changes preserve the Go 1.x future compatibility promise. It does
not break compatibility for programs not using any persistent memory features
exposed by go-pmem.
Having said that, we acknowledge a few downsides with our current design:
1. We store memory allocator metadata in persistent memory. When a program
restarts, we use these metadata to recreate the program state of the memory
allocator and garbage collector. As with any persistent data, we need to
maintain the data layout of this metadata. Any changes to Go memory allocator's
datastructure layout can break backward compatibility with our persistent
metadata. This can be fixed by developing an offline tool which can do this
data format conversion or by embedding this capability in go-pmem.
2. We currently add three new Go keywords : pnew, pmake and txn. pnew, pmake are
persistent memory allocation APIs and txn is used to demarcate transactional
updates to data structures. We have explored a few ways to avoid making these
language changes as described below.
a) pnew/pmake
The availability of generics support in a future version of Go can help us avoid
introducing these memory allocation functions. They can instead be functions
exported by a Go package.
```
func Pnew[T any](_ T) *T {
ptr := runtime.pnew(T)
return ptr
}
func Pmake[T any](_ T, len, cap int) []T {
slc := runtime.pmake([]T, len, cap)
return slc
}
```
`runtime.pnew` and `runtime.pmake` would be special functions that can take a
type as arguments. They then behave very similar to the `new()` and `make()`
APIs but allocate objects in the persistent memory heap.
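The shape of that API can be tried out today with volatile stand-ins, using the released square-bracket generics syntax. This is only a sketch: real go-pmem would call `runtime.pnew`/`runtime.pmake` to allocate from the persistent heap, whereas here `new` and `make` allocate ordinary volatile memory, and the signatures are simplified to take an explicit type argument rather than a value.

```go
package main

import "fmt"

// Pnew is a volatile stand-in for the proposed persistent allocator:
// it returns a pointer to a zero-value T.
func Pnew[T any]() *T { return new(T) }

// Pmake is a volatile stand-in for persistent slice allocation.
func Pmake[T any](len, cap int) []T { return make([]T, len, cap) }

func main() {
	p := Pnew[int]()
	*p = 42
	s := Pmake[string](0, 4)
	s = append(s, "a")
	fmt.Println(*p, len(s), cap(s)) // 42 1 4
}
```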
b) txn
An alternative approach would be to define a new Go pragma that identifies a
transactional block of code. It could have the following syntax:
```
//go:transactional
{
// transactional data updates
}
```
Another alternative approach would be to use closures with the help of a few
runtime and compiler changes. For example, something like this could work:
```
runtime.Txn() foo()
```
Internally, this would be similar to how the Go compiler instruments stores
when the -race/-msan flags are passed while compiling. In this case, writes
inside function foo() will be instrumented and foo() will be executed
transactionally.
See this playground [code](https://go2goplay.golang.org/p/WRUTZ9dr5W3) for a
complete code listing with our proposed alternatives.
## Implementation
Our implementation is based on a fork of the Go 1.15 source code. It adds
three new keywords to Go: `pnew`, `pmake`, and `txn`. `pnew` and `pmake` are
persistent memory allocation APIs, and `txn` is used to demarcate a block of
transactional data updates to persistent memory.
1. pnew - `func pnew(Type) *Type`
Just like `new`, `pnew` creates a zero-value object of the `Type` argument in
persistent memory and returns a pointer to this object.
2. pmake - `func pmake(t Type, size ...IntType) Type`
The `pmake` API is used to create a slice in persistent memory. The semantics
of `pmake` are exactly the same as those of `make` in Go. We don't yet support
creating maps and channels in persistent memory.
3. txn
```
txn() {
// transaction data updates
}
```
Our code changes to Go can be broken down into two parts - runtime changes and
compiler-SSA changes.
### Runtime changes
We extend the Go runtime to support persistent memory allocations. The garbage
collector now works across both the persistent and volatile heaps. The `mspan`
data structure has one additional data member, `memtype`, to distinguish
between persistent and volatile spans. We also extend various memory allocator
data structures in mcache, mcentral, and mheap to store metadata related to
persistent memory and volatile memory separately. The garbage collector now
understands these different span types and puts garbage-collected spans back
in the appropriate data structures depending on their `memtype`.
Persistent memory is managed in arenas whose size is a multiple of 64MB. Each
persistent memory arena stores, in its header section, metadata that
facilitates heap recovery in case of an application crash or restart. Two kinds
of metadata are stored:
* GC heap type bits - The garbage collector heap type bits set for any object
in this arena are copied as such to the metadata section, to be restored on a
subsequent run of the application
* Span table - Captures metadata about each span in this arena that lets the
heap recovery code recreate these spans in the next run.
We added the following APIs in the runtime package to manage persistent memory:
1. `func PmemInit(fname string) (unsafe.Pointer, error)`
Used to initialize persistent memory. It takes the path to a persistent memory
file as input. It returns the application root pointer and an error value.
2. `func SetRoot(addr unsafe.Pointer) (err error)`
Used to set the application root pointer. All application data in persistent
memory hangs off this root pointer.
3. `func GetRoot() (addr unsafe.Pointer)`
Returns the root pointer set using SetRoot().
4. `func InPmem(addr unsafe.Pointer) bool`
Returns whether `addr` points to data in persistent memory.
5. `func PersistRange(addr unsafe.Pointer, len uintptr)`
Flushes all the cachelines in the address range (addr, addr+len) to ensure
any data updates to this memory range are persistently stored.
### Compiler-SSA changes
1. We change the parser to recognize three new language tokens - `pnew`,
`pmake`, and `txn`.
2. We add a new SSA pass to instrument all stores to persistent memory. Because
data in persistent memory survives crashes, updates to data in persistent memory
have to be transactional.
3. The Go AST and SSA were modified so that users can demarcate a block of
Go code as transactional by encapsulating it within a `txn()` block.
- To do this, we add a new keyword to Go called `txn`.
- A new SSA pass then looks for stores (`OpStore`/`OpMove`/`OpZero`) to
persistent memory locations within this `txn()` block and records the old
data at each location in an [undo log](https://github.com/vmware/go-pmem-transaction/blob/master/transaction/undoTx.go).
This is done before making the actual memory update.
### go-pmem packages
We have developed two packages that make it easier to use go-pmem to write
persistent memory applications.
1. [pmem](https://github.com/vmware/go-pmem-transaction/tree/master/pmem) package
It provides a simple `Init(fname string) bool` API that applications can use
to initialize persistent memory. It returns whether this is a first-time
initialization. If it is not, any incomplete transactions are reverted as well.
The pmem package also provides named objects: string names can be associated
with objects in persistent memory, and users can create and retrieve these
objects by name.
2. [transaction](https://github.com/vmware/go-pmem-transaction/tree/master/transaction) package
The transaction package provides the implementation of the undo logging that
go-pmem uses to enable crash-consistent data updates.
### Example Code
Below is a simple linked list application written using go-pmem
```
// A simple linked list application. On the first invocation, it creates a
// persistent memory pointer named "dbRoot" which holds pointers to the first
// and last element in the linked list. On each run, a new node is added to
// the linked list and all contents of the list are printed.
package main

import (
	"math/rand"

	"github.com/vmware/go-pmem-transaction/pmem"
	"github.com/vmware/go-pmem-transaction/transaction"
)

const (
	// Used to identify a successful initialization of the root object
	magic = 0x1B2E8BFF7BFBD154
)

// Structure of each node in the linked list
type entry struct {
	id   int
	next *entry
}

// The root object that stores pointers to the elements in the linked list
type root struct {
	magic int
	head  *entry
	tail  *entry
}

// A function that populates the contents of the root object transactionally
func populateRoot(rptr *root) {
	txn() {
		rptr.magic = magic
		rptr.head = nil
		rptr.tail = nil
	}
}

// Adds a node to the linked list and updates the tail (and head if empty)
func addNode(rptr *root) {
	entry := pnew(entry)
	txn() {
		entry.id = rand.Intn(100)
		if rptr.head == nil {
			rptr.head = entry
		} else {
			rptr.tail.next = entry
		}
		rptr.tail = entry
	}
}

func main() {
	firstInit := pmem.Init("database")
	var rptr *root
	if firstInit {
		// Create a new named object called dbRoot and point it to rptr
		rptr = (*root)(pmem.New("dbRoot", rptr))
		populateRoot(rptr)
	} else {
		// Retrieve the named object dbRoot
		rptr = (*root)(pmem.Get("dbRoot", rptr))
		if rptr.magic != magic {
			// An object named dbRoot exists, but its initialization did not
			// complete previously.
			populateRoot(rptr)
		}
	}
	addNode(rptr) // Add a new node in the linked list
}
```
# Go generate: A Proposal
Author: Rob Pike
Accepted in the Go 1.4 release.
## Introduction
The go build command automates the construction of Go programs but
sometimes preliminary processing is required, processing that go build
does not support.
Motivating examples include:
- yacc: generating .go files from yacc grammar (.y) files
- protobufs: generating .pb.go files from protocol buffer definition (.proto) files
- Unicode: generating tables from UnicodeData.txt
- HTML: embedding .html files into Go source code
- bindata: translating binary files such as JPEGs into byte arrays in Go source
There are other processing steps one can imagine:
- string methods: generating String() string methods for types used as enumerated constants
- macros: generating customized implementations given generalized packages, such as sort.Ints from ints
This proposal offers a design for smooth automation of such processing.
## Non-goal
It is not a goal of this proposal to build a generalized build system
like the Unix make(1) utility.
We deliberately avoid doing any dependency analysis.
The tool does what is asked of it, nothing more.
It is hoped, however, that it may replace many existing uses of
make(1) in the Go repo at least.
## Design
There are two basic elements, a new subcommand for the go command,
called go generate, and directives inside Go source files that control
generation.
When go generate runs, it scans Go source files looking for those
directives, and for each one executes a generator that typically
creates a new Go source file.
The go generate tool also sets the build tag "generate" so that files
may be examined by go generate but ignored during build.
The usage is:
```
go generate [-run regexp] [file.go...|packagePath...]
```
(Plus the usual `-x`, `-n`, `-v` and `-tags` options.)
If packages are named, each Go source file in each package is scanned
for generator directives, and for each directive, the specified
generator is run; if files are named, they must be Go source files and
generation happens only for directives in those files.
Given no arguments, generator processing is applied to the Go source
files in the current directory.
The `-run` flag takes a regular expression, analogous to that of the
go test subcommand, that restricts generation to those directives
whose command (see below) matches the regular expression.
Generator directives may appear anywhere in the Go source file and are
processed sequentially (no parallelism) in source order as presented
to the tool.
Each directive is a // comment beginning a line, with syntax
```
//go:generate command arg...
```
where command is the generator (such as `yacc`) to be run,
corresponding to an executable file that can be run locally; it must
either be in the shell path (`gofmt`) or fully qualified
(`/usr/you/bin/mytool`) and is run in the package directory.
The arguments are space-separated tokens (or double-quoted strings)
passed to the generator as individual arguments when it is run.
Shell-like variable expansion is available for any environment
variables such as `$HOME`.
Also, the special variable `$GOFILE` refers to the name of the file
containing the directive.
(We may need other special variables such as `$GOPACKAGE`.
When the generator is run, these are also provided in the shell
environment.)
No other special processing, such as globbing, is provided.
No further generators are run if any generator returns an error exit
status.
As an example, say we have a package `my/own/gopher` that includes a
yacc grammar in file `gopher.y`.
Inside `main.go` (not `gopher.y`) we place the directive
```
//go:generate yacc -o gopher.go gopher.y
```
(More about what `yacc` means in the next section.)
Whenever we need to update the generated file, we give the shell
command,
```
% go generate my/own/gopher
```
or, if we are already in the source directory,
```
% go generate
```
If we want to make sure that only the yacc generator is run, we
execute
```
% go generate -run yacc
```
If we have fixed a bug in yacc and want to update all yacc-generated
files in our tree, we can run
```
% go generate -run yacc all
```
The typical cycle for a package author developing software that uses
`go generate` is
```
% edit …
% go generate
% go test
```
and once things are settled, the author commits the generated files to
the source repository, so that they are available to clients that use
go get:
```
% git add *.go
% git commit
```
## Commands
The yacc program is of course not the standard version, but is
accessed from the command line by
```
go tool yacc args...
```
To make it easy to use tools like yacc that are not installed in
$PATH, have complex access methods, or benefit from extra flags or
other wrapping, there is a special directive that defines a shorthand
for a command.
It is a `go:generate` directive followed by the keyword/flag
`-command` and which generator it defines; the rest of the line is
substituted for the command name when the generator is run.
Thus to define `yacc` as a generator command we access normally by
running `go tool yacc`, we first write the directive
```
//go:generate -command yacc go tool yacc
```
and then all other generator directives using `yacc` that follow in
that file (only) can be written as above:
```
//go:generate yacc -o gopher.go gopher.y
```
which will be translated to
```
go tool yacc -o gopher.go gopher.y
```
when run.
## Discussion
This design is unusual but is driven by several motivating principles.
First, `go generate` is intended[^1] to be run by the author of a
package, not the client of it.
The author of the package generates the required Go files and includes
them in the package; the client does a regular `go get` or `go
build`.
Generation through `go generate` is not part of the build, just a tool
for package authors.
This avoids complicating the dependency analysis done by Go build.
[^1]: One can imagine scenarios where the author wishes the client to
run the generator, but in such cases the author must guarantee that
the client has the generator available.
Regardless, `go get` will not automate the running of the processor,
so further installation instructions will need to be provided by the
author.
Second, `go build` should never cause generation to happen
automatically by the client of the package. Generators should run only
when explicitly requested.
Third, the author of the package should have great freedom in what
generator to use (that is a key goal of the proposal), but the client
might not have that processor available.
As a simple example, if it is a shell script, it will not run on
Windows.
It is important that automated generation not break clients but be
invisible to them, which is another reason it should be run only by
the author of the package.
Finally, it must fit well with the existing go command, which means it
applies only to Go source files and packages.
This is why the directives are in Go files but not, for example, in
the .y file holding a yacc grammar.
## Examples
Here are some hypothetical worked examples.
There are countless more possibilities.
### String methods
We wish to generate a String method for a named constant type.
We write a tool, say `strmeth`, that reads a definition for a single
constant type and values and prints a complete Go source file
containing a method definition for that type.
In our Go source file, `main.go`, we decorate each constant
declaration like this (with some blank lines interposed so the
generator directive does not appear in the doc comment):
```Go
//go:generate strmeth Day -o day_string.go $GOFILE
// Day represents the day of the week
type Day int
const (
Sunday Day = iota
Monday
...
)
```
The `strmeth` generator parses the Go source to find the definition of
the `Day` type and its constants, and writes out a `String() string`
method for that type.
For the user, generation of the string method is trivial: just run `go
generate`.
### Yacc
As outlined above, we define a custom command
```
//go:generate -command yacc go tool yacc
```
and then anywhere in main.go (say) we write
```
//go:generate yacc -o foo.go foo.y
```
### Protocol buffers
The process is the same as with yacc.
Inside `main.go`, we write, for each protocol buffer file we have, a
line like
```
//go:generate protoc -go_out=. file.proto
```
Because of the way protoc works, we could generate multiple proto
definitions into a single `.pb.go` file like this:
```
//go:generate protoc -go_out=. file1.proto file2.proto
```
Since no globbing is provided, one cannot say `*.proto`, but this is
intentional, for simplicity and clarity of dependency.
Caveat: The protoc program must be run at the root of the source tree;
we would need to provide a `-cd` option to it or wrap it somehow.
### Binary data
A tool that converts binary files into byte arrays that can be
compiled into Go binaries would work similarly.
Again, in the Go source we write something like
```
//go:generate bindata -o jpegs.go pic1.jpg pic2.jpg pic3.jpg
```
This also demonstrates another reason the annotations are in Go
source: there is no easy way to inject them into binary files.
### Sort
One could imagine a variant sort implementation that allows one to
specify concrete types that have custom sorters, just by automatic
rewriting of macro-like sort definition.
To do this, we write a `sort.go` file that contains a complete
implementation of sort on an explicit but undefined type spelled, say,
`TYPE`.
In that file we provide a build tag so it is never compiled (`TYPE` is
not defined, so it won't compile) but is processed by `go generate`:
```
// +build generate
```
Then we write a generator directive for each type for which we want a
custom sort:
```
//go:generate rename TYPE=int
//go:generate rename TYPE=strings
```
or perhaps
```
//go:generate rename TYPE=int TYPE=strings
```
The rename processor would be a simple wrapping of `gofmt -r`, perhaps
written as a shell script.
There are many more possibilities, and it is a goal of this proposal
to encourage experimentation with pre-build-time code generation.
# Proposal: go:wasmexport directive
Author: Francesco Guardiani
Last updated: 2020-12-17
Discussion at https://golang.org/issue/42372.
## Abstract
The goal of this proposal is to add a new compiler directive `go:wasmexport` to
export Go functions when compiling to WebAssembly.
This directive is similar to the `go:wasmimport` directive proposed in
https://golang.org/issue/38248.
## Background
Wasm is a technology that allows users to execute instructions inside virtual
machines that are sandboxed by default; that is, by default the Wasm code
cannot interact with the external world, and vice versa.
Wasm can be used in very different contexts and, recently, it's becoming more
and more used as a technology to extend, at runtime, software running outside
browsers.
In order to do that, the extensible software provides ad-hoc libraries to the
"extension developers" for developing Wasm modules.
Thanks to a well-defined ABI, the extensible software is able to access the
compiled Wasm module and execute the extension logic.
Some systems that adopt this extension mechanism include
[Istio](https://istio.io/latest/docs/concepts/wasm/) and
[OPA](https://www.openpolicyagent.org/docs/v0.21.1/wasm/).
In order to use Wasm modules in such environments, the developer should be able
to define which Go functions can be accessible from the outside and what host
functions can be accessible from within the Wasm module.
While the latter need is already covered and implemented by the issue
https://golang.org/issue/38248, this proposal tries to address the former need.
### An example extension module
As a complete example, assume there is a system that triggers some signals and
that can be extended to develop applications based on these signals.
The extension module is intended to be used just as "signal handler", maybe with
some lifecycle methods (e.g. start and stop) to prepare the environment and to
teardown it.
The extension module, from the host's perspective, is an actor that is
invoked on every signal.
When the host wants to start using the module, the `start` export is invoked.
`start` spawns, using the `go` statement, a goroutine that loops
on a global channel, like:
```go
for event := range eventsch {
// Process events
}
```
Each export then eventually pushes messages into this `eventsch`:
```go
eventsch <- value
```
When the `process_a` export is invoked, the value is pushed into
`eventsch` and the goroutine spawned by `start` catches it.
In other words, the interaction between host and module looks like this:
![](https://user-images.githubusercontent.com/6706544/98349379-34159400-201a-11eb-8417-5d728ce141ca.png)
## Proposal
### Interface
A new directive will allow users to define what functions should be exported in
the Wasm module produced by the Go compiler. Given this code:
```go
//go:wasmexport hello_world
func HelloWorld() {
println("Hello world!")
}
```
The compiler will produce this Wasm module:
```shell
% wasm-nm -e sample/main.wasm
e run
e resume
e getsp
e hello_world
```
Note that the first 3 exports are the default hardcoded exports of the Go ABI.
### Execution
Every time the module executor (also called the host) invokes the
`hello_world` export, a new goroutine is spawned and immediately executed to
run the instructions in `HelloWorld`.
This wakes up the goroutine scheduler, which will try to run all the goroutines
up to the point when they are all parked.
When all goroutines are parked, the `hello_world` export will complete its
execution and return the return value of `HelloWorld` back to the host.
### Types
The signature of an exported function (parameters and return value) may
contain only Wasm-supported types.
## Rationale
### Relation with `syscall/js.FuncOf`
The functionality of defining exports already exists in Go, through the Go JS
ABI. The cons of `syscall/js.FuncOf` are that it is not idiomatic for Wasm
users and that it assumes the host is a JavaScript environment.
Because of these issues, it's complicated for an extensible system to support
Wasm Go modules, because doing so requires "faking" a JavaScript environment
to integrate with the Go ABI.
### Relation with Wasm threads proposal
This approach doesn't mandate any particular interaction style between host and
module, nor the underlying threading system the host uses to execute the module.
In fact, as of today, every Wasm module just assumes that the underlying
execution environment, that is, the virtual machine that executes Wasm
instructions, is sequential. There is no notion of parallelism.
There is a proposal in the Wasm community, called
[Wasm threads proposal](https://github.com/webassembly/threads), that allows
Wasm virtual machines to be able to process instructions in parallel.
The Go project could, at some point, evolve to support the Wasm Threads
proposal, exposing an interface to execute the goroutine scheduler on multiple
threads.
This might affect or not (depending on the future decisions) the execution model
of the export, but without effectively changing the semantics from the user
point of view, nor the interface described above.
For example, assume Go implements the goroutine scheduler on multiple Wasm
threads; from the user's perspective there is no semantic difference between
the exported function `hello_world` returning after all goroutines are parked
and it returning as soon as `HelloWorld` completes.
### Relation with Wasm interface types proposal
The
[Wasm interface types proposal](https://github.com/WebAssembly/interface-types/blob/master/proposals/interface-types/Explainer.md)
aims to provide higher level typing in Wasm modules for imports and exports.
Thanks to _Wasm interface types_, we might in the future be able to allow
users to extend the set of supported types in import and export signatures.
Like https://golang.org/issue/38248, the `go:wasmexport` directive will not be
covered by Go's compatibility promise as long as the Wasm architecture itself is
not considered stable.
## Implementation
The implementation involves:
1. Implement the `go:wasmexport` directive in the compiler and test the proper
compilation to a Wasm module including the export
2. Implement the execution model of `go:wasmexport`
3. (Optional) Remove the hardcoded exports and convert them to use the
`go:wasmexport` directive
The step (1) should look very similar to the work already done for the
`go:wasmimport` directive, available
[here](https://go-review.googlesource.com/c/go/+/252828/).
Step (2) will mostly require refactoring the runtime code already available to
implement [`syscall/js.FuncOf`](https://golang.org/pkg/syscall/js/#FuncOf) (e.g.
`runtime/rt0_js_wasm.s`), in order to generalize it to any export (and not just
the built-in ones).
Step (3) might be required or not, depending on the outcome of step (2), in
order to keep a correct implementation of the Go JS ABI, without changing its
behaviours.
## Open issues (if applicable)
- Should we allow users to control whether to execute all goroutines up to when
they're parked or to return immediately after the exported Go function (e.g.
`helloWorld`) completes?
|