{ "category": "Provisioning", "file_name": ".md", "project_name": "Goldilocks", "subcategory": "Security & Compliance" }
[ { "data": "This utility interacts with the Fairwinds BIF-Server to find base images and report on their vulnerabilities. When using a container scanning tool to identify known vulnerabilities (CVEs, or common vulnerabilities and exposures), it can be difficult to understand where the vulnerabilities exist in the container, and how to mitigate them. Often, the simplest and most efficient mitigation is to update the \"base image\" - or the image used in the FROM statement in your container definition. BIF allows you to understand the impact of updating the base image of your container will have: Download the latest binary from the releases page (opens new window) First, you must request an API token to use with the base image finder. You can do this via the cli: ``` bif request-token ``` You will receive your token via email. To automatically have BIF use this token, export it as INSIGHTSOSSTOKEN in your environment. ``` bif find --image-layers $(skopeo inspect docker://us-docker.pkg.dev/fairwinds-ops/oss/polaris:7.0.0 | jq .Layers[] -rc) Input: [sha256:2408cc74d12b6cd092bb8b516ba7d5e290f485d3eb9672efc00f0583730179e8] BASE IMAGE | LAST SCAN | CVE | SEVERITY | CVSS | FIXED IN -++-+-++-- alpine:3.16.0 | 2023-02-28 | CVE-2022-2097 | MEDIUM | 5.30 | 3.17.3, 3.16.5 | | CVE-2022-30065 | HIGH | 7.80 | 3.17.3, 3.16.5 | | CVE-2022-37434 | CRITICAL | 9.80 | 3.17.3, 3.16.5 | | CVE-2022-4304 | MEDIUM | 5.90 | 3.17.3, 3.16.5 | | CVE-2022-4450 | HIGH | 7.50 | 3.17.3, 3.16.5 | | CVE-2023-0215 | HIGH | 7.50 | 3.17.3, 3.16.5 | | CVE-2023-0286 | HIGH | 7.40 | 3.17.3, 3.16.5 ``` ``` bif find --image us-docker.pkg.dev/fairwinds-ops/oss/polaris:7.0.0 Input: us-docker.pkg.dev/fairwinds-ops/oss/polaris 7.0.0 BASE IMAGE | LAST SCAN | CVE | SEVERITY | CVSS | FIXED IN -++-+-++-- alpine:3.16.0 | 2023-02-28 | CVE-2022-2097 | MEDIUM | 5.30 | 3.17.3, 3.16.5 | | CVE-2022-30065 | HIGH | 7.80 | 3.17.3, 3.16.5 | | CVE-2022-37434 | CRITICAL | 9.80 | 3.17.3, 3.16.5 | | CVE-2022-4304 | MEDIUM | 5.90 | 3.17.3, 3.16.5 | | CVE-2022-4450 | HIGH | 7.50 | 3.17.3, 3.16.5 | | CVE-2023-0215 | HIGH | 7.50 | 3.17.3, 3.16.5 | | CVE-2023-0286 | HIGH | 7.40 | 3.17.3, 3.16.5 ``` If you run into issues, you can try adding debug logging with the --debug flag. If you have further issues, please reach out in the community slack or file a github issue." } ]
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "GitGuardian", "subcategory": "Security & Compliance" }
[ { "data": "Download OpenAPI specification:Download Whether you want to build a complete integration with your software development workflow or simply want to test GitGuardian's policy break detection on any text content, you can use our API. ``` 2020-03-16T04:46:00+00:00 # for date-time ``` GitGuardian supported wrappers: GitGuardian provides you with GitGuardian Shield, a CLI application that uses the GitGuardian API through py-gitguardian to scan your files and detect potential secrets or issues in your code. This CLI application can be used in many CIs (such as GitHub Actions, GitLab Pipelines, CircleCI,...) or as a pre-commit or pre-receive hook. The GitGuardian API uses API keys to authenticate requests. You need to create an account before getting started in order to get an API key. Your API key can be created and revoked from the API section of your dashboard. Your API key must kept private and should neither be embedded directly in the code nor versioned in Git. (Please do not push GitGuardian's API keys to public GitHub repositories ^^). Beware your API keys can expire and can be revoked. Use /v1/health to check the validity of your token if needed. ``` curl -H \"Authorization: Token ${TOKEN}\" \\ https://api.gitguardian.com/v1/health ``` Manage API tokens. Current token details Invalid API key Token revocation confirmation Invalid API key List all the tokens in the workspace, some filters are available and described below. | 0 | 1 | |:--|:-| | cursor | string Pagination cursor. | | per_page | integer [ 1 .. 100 ] Default: 20 Number of items to list per page. | | status | string Enum: active expired revoked Status of the token. | | memberid | integer Example: memberid=1Filter by member id. | | creatorid | integer Example: creatorid=1Filter by creator id. | | scopes | string Enum: scan incidents:read incidents:write incidents:share members:read members:write teams:read teams:write auditlogs:read honeytokens:read honeytokens:write apitokens:read apitokens:write Example: scopes=incidents:read,apitokens:readTokens with one of the following scopes. | | search | string Search tokens based on their name. | | ordering | string Enum: createdat -createdat lastusedat -lastusedat expireat -expireat revokedat -revokedat Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. | Pagination cursor. Number of items to list per page. Status of the token. Filter by member id. Filter by creator id. Tokens with one of the following scopes. Search tokens based on their name. Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. API tokens list Invalid data Invalid API key | 0 | 1 | |:--|:| | token_idrequired | string Example: 5ddaad0c-5a0c-4674-beb5-1cd198d13360Id of the token. | Id of the token. Current token details Invalid API key API token not found | 0 | 1 | |:--|:| | token_idrequired | string Example: 5ddaad0c-5a0c-4674-beb5-1cd198d13360Id of the token. | Id of the token. Token revocation confirmation Invalid API key API token not found Create a short lived JWT for authentication to specific GitGuardian services, including HasMySecretLeaked. | 0 | 1 | |:--|:-| | audiencerequired | string Audience of the JWT. | | audience_type | string Type of audience. | Audience of the JWT. Type of audience. Created JWT Invalid data Invalid API key Manage incidents found by post-receive hooks on your GitGuardian Dashboard. List secret incidents detected by the GitGuardian dashboard. 
Occurrences are not returned in this route. | 0 | 1 | |:-|:-| | cursor | string Pagination" }, { "data": "| | page | integer >= 0 Deprecated Default: 1 Page number. | | per_page | integer [ 1 .. 100 ] Default: 20 Number of items to list per page. | | datebefore | string <datetime> Example: datebefore=2019-08-30T14:15:22ZEntries found before this date. | | dateafter | string <datetime> Example: dateafter=2019-08-22T14:15:22ZEntries found after this date. | | assigneeemail | string Example: assigneeemail=eric@gitguardian.comIncidents assigned to this email. | | assigneeid | integer Example: assigneeid=4932Incidents assigned to this user id. | | status | string Enum: IGNORED TRIGGERED ASSIGNED RESOLVED Incidents with the following status. | | severity | string Enum: critical high medium low info unknown Filter incidents by severity. | | validity | string Enum: valid invalid failedtocheck no_checker unknown Secrets with the following validity. | | tags | string Enum: DEFAULTBRANCH FROMHISTORICALSCAN CHECKRUNSKIPFALSEPOSITIVE CHECKRUNSKIPLOWRISK CHECKRUNSKIPTESTCRED PUBLIC PUBLICLYEXPOSED PUBLICLYLEAKED REGRESSION SENSITIVEFILE TESTFILE NONE Example: tags=FROMHISTORICALSCAN,SENSITIVEFILEIncidents with one of the following tags. Use NONE if you want to filter incidents with no tags. | | ordering | string Enum: date -date resolvedat -resolvedat ignoredat -ignoredat Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. | | detectorgroupname | string Example: detectorgroupname=slackbot_tokenIncidents belonging to the specified detector group. | | ignorerid | integer Example: ignorerid=4932Incidents ignored by this user id. | | ignorerapitokenid | string <uuid> Example: ignorerapitokenid=fdf075f9-1662-4cf1-9171-af50568158a8Incidents ignored by this API token id. | | resolverid | integer Example: resolverid=4932Incidents resolved by this user id. | | resolverapitokenid | string <uuid> Example: resolverapitokenid=fdf075f9-1662-4cf1-9171-af50568158a8Incidents resolved by this API token id. | Pagination cursor. Page number. Number of items to list per page. Entries found before this date. Entries found after this date. Incidents assigned to this email. Incidents assigned to this user id. Incidents with the following status. Filter incidents by severity. Secrets with the following validity. Incidents with one of the following tags. Use NONE if you want to filter incidents with no tags. Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. Incidents belonging to the specified detector group. Incidents ignored by this user id. Incidents ignored by this API token id. Incidents resolved by this user id. Incidents resolved by this API token id. Secret Incident List Invalid data Invalid API key API under maintenance Retrieve secret incident detected by the GitGuardian dashboard with its occurrences. | 0 | 1 | |:--|:-| | incident_idrequired | integer The id of the incident to retrieve | The id of the incident to retrieve | 0 | 1 | |:--|:| | with_occurrences | integer [ 0 .. 100 ] Default: 20 Retrieve a number of occurrences of this incident. | Retrieve a number of occurrences of this incident. 
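As a quick sanity check, the calls below reuse the authentication pattern from the health-check example above. The /v1/incidents/secrets path and the placeholder incident id are assumptions inferred from this reference, so treat this as a sketch rather than a verified recipe:

```
# List high-severity triggered incidents, newest first (path assumed: /v1/incidents/secrets).
curl -G -H "Authorization: Token ${TOKEN}" \
  --data-urlencode "severity=high" \
  --data-urlencode "status=TRIGGERED" \
  --data-urlencode "ordering=-date" \
  --data-urlencode "per_page=20" \
  https://api.gitguardian.com/v1/incidents/secrets

# Retrieve one incident with up to 20 occurrences (incident id 3759 is a placeholder).
curl -H "Authorization: Token ${TOKEN}" \
  "https://api.gitguardian.com/v1/incidents/secrets/3759?with_occurrences=20"
```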
Update a secret incident. Path parameter: incident_id (required, integer): the id of the incident to update. Body parameter: severity (string, enum: critical, high, medium, low, info, unknown). Responses: Secret Incident Details without occurrences; Invalid data; Invalid API key; API under maintenance.

Retrieve where a secret has been publicly leaked. Path parameter: incident_id (required, integer). Responses: List of the leaks of the incident; Invalid API key; Incident not found; API under maintenance.

Assign a secret incident detected by the GitGuardian dashboard to a workspace member by email. Path parameter: incident_id (required, integer). Body parameters: email (string): email of the member to assign, mutually exclusive with member_id; member_id (number): id of the member to assign, mutually exclusive with email. Responses: Secret Incident Details without occurrences; Invalid data; Invalid API key; Data conflict; API under maintenance.

Unassign a secret incident from a workspace member by email. Path parameter: incident_id (required, integer). Responses: Secret Incident Details without occurrences; Invalid data; Invalid API key; Data conflict; API under maintenance.

Resolve a secret incident detected by the GitGuardian dashboard. Path parameter: incident_id (required, integer). Body parameter: secret_revoked (required, boolean). Responses: Secret Incident Details without occurrences; Invalid data; Invalid API key; Data conflict; API under maintenance.

Ignore a secret incident detected by the GitGuardian dashboard. Path parameter: incident_id (required, integer). Body parameter: ignore_reason (required, string, enum: test_credential, false_positive, low_risk). Responses: Secret Incident Details without occurrences; Invalid data; Invalid API key; Data conflict; API under maintenance.

Unresolve or unignore a secret incident detected by the GitGuardian dashboard. Path parameter: incident_id (required, integer). Responses: Secret Incident Details without occurrences; Invalid data; Invalid API key; Data conflict; API under maintenance.

Share a secret incident by creating a public link. Path parameter: incident_id (required, integer). Body parameters: auto_healing (boolean, default: false): allow the developer to resolve or ignore through the share link; feedback_collection (boolean, default: true): allow the developer to submit their feedback through the share link; lifespan (integer >= 0, default: 0): lifespan, in hours, of the share link; if 0 or unset, a default value will be applied based on the workspace settings. Responses: Incident Sharing Details; Invalid data; Invalid API key; Data conflict; API under maintenance.

Unshare a secret incident by revoking its public link. Path parameter: incident_id (required, integer). Responses: The public link was revoked successfully; Invalid data; Invalid API key; Data conflict; API under maintenance.

Grant a user, an existing invitee or a team access to a secret incident. Path parameter: incident_id (required, integer). Body parameters: email (string): email address of a user or invitee, mutually exclusive with member_id, invitation_id and team_id; member_id (number): id of a member, mutually exclusive with email, invitation_id and team_id; invitation_id (number): id of an invitation, mutually exclusive with email, member_id and team_id; team_id (number): id of a team, except for the global team, mutually exclusive with email, member_id and invitation_id; incident_permission (string, enum: can_view, can_edit, full_access). Responses: The access was granted successfully; Invalid data; Invalid API key; Permission denied; Data conflict; API under maintenance.

Revoke access of a user, an existing invitee or a team to a secret incident. Path parameter: incident_id (required, integer). Body parameters: email, member_id, invitation_id and team_id, defined as above and mutually exclusive with one another. Responses: The access was revoked successfully; Invalid data; Invalid API key; Permission denied; Data conflict; API under maintenance.
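To make the write operations concrete, here is a hedged sketch of resolving, ignoring, and sharing an incident. The action sub-paths (/resolve, /ignore, /share) under the assumed /v1/incidents/secrets prefix are inferred from this reference, not verified, and incident id 3759 is a placeholder:

```
# Resolve an incident, recording that the secret was revoked.
curl -X POST -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"secret_revoked": true}' \
  https://api.gitguardian.com/v1/incidents/secrets/3759/resolve

# Ignore an incident as a test credential.
curl -X POST -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"ignore_reason": "test_credential"}' \
  https://api.gitguardian.com/v1/incidents/secrets/3759/ignore

# Create a 48-hour share link that lets the developer resolve the incident themselves.
curl -X POST -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"auto_healing": true, "lifespan": 48}' \
  https://api.gitguardian.com/v1/incidents/secrets/3759/share
```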
List all the members having access to a secret incident. Path parameter: incident_id (required, integer). Query parameters: cursor (string): pagination cursor; page (integer >= 0, deprecated, default: 1): page number; per_page (integer [1..100], default: 20): number of items to list per page; member_id (number, example: member_id=1234): filter on a specific member id; incident_permission (string, enum: can_view, can_edit, full_access; example: incident_permission=can_view); role (string, deprecated, enum: owner, manager, member, restricted): filter members based on their role; search (string, deprecated): search members based on their name or email. Responses: List of members; Invalid data; Invalid API key; Secret Incident not found; API under maintenance.

List all the teams having access to a secret incident. Path parameter: incident_id (required, integer). Query parameters: cursor (string): pagination cursor; team_id (number, example: team_id=1234): filter on a specific team id; incident_permission (string, enum: can_view, can_edit, full_access; example: incident_permission=can_view). Responses: List of team accesses; Invalid data; Invalid API key; Permission denied; Secret Incident not found; API under maintenance.

List all the invitations having access to a secret incident. Path parameter: incident_id (required, integer). Query parameters: cursor (string): pagination cursor; invitation_id (number, example: invitation_id=1234): filter on a specific invitation id; incident_permission (string, enum: can_view, can_edit, full_access; example: incident_permission=can_view): filter accesses with a specific permission. Responses: List of invitation accesses; Invalid data; Invalid API key; Permission denied; Secret Incident not found; API under maintenance.

List secret incidents linked to a source. Occurrences are not returned in this route. Path parameter: source_id (required, integer, example: 5523): the id of the source to filter on. Accepts the same filters as the secret incidents list above (cursor, per_page, date_before, date_after, assignee_email, assignee_id, status, severity, validity, tags, ordering, detector_group_name, ignorer_id, ignorer_api_token_id, resolver_id, resolver_api_token_id). Responses: Secret Incident List; Invalid data; Invalid API key; API under maintenance.

List secret incidents of a particular team. Occurrences are not returned in this route. Path parameter: team_id (required, integer): the id of the team. Accepts the same filters as the secret incidents list above. Responses: Secret Incident List; Invalid data; Invalid API key; API under maintenance.

Manage occurrences found by post-receive hooks on your GitGuardian Dashboard.

List occurrences of secrets in the monitored perimeter.

| Parameter | Type | Description |
|:--|:--|:--|
| cursor | string | Pagination cursor. |
| page | integer >= 0, deprecated, default: 1 | Page number. |
| per_page | integer [1..100], default: 20 | Number of items to list per page. |
| date_before | string <date-time> | Entries found before this date (example: date_before=2019-08-30T14:15:22Z). |
| date_after | string <date-time> | Entries found after this date (example: date_after=2019-08-22T14:15:22Z). |
| source_id | integer | Filter on the source ID (example: source_id=5523). |
| source_name | string | Entries matching this source name search (example: source_name=gitguardian/test-repository). |
| source_type | string, enum: bitbucket, github, gitlab, azure_devops, slack, jira_cloud, confluence_cloud, microsoft_teams | Filter by integration type (example: source_type=github). |
| incident_id | integer | Filter by incident ID. |
| presence | string, enum: present, removed | Entries that have the following presence status. |
| author_name | string, >= 3 characters | Entries matching this author name search (example: author_name=John Doe). |
| author_info | string, >= 3 characters | Entries matching this author email search (example: author_info=john.doe@gitguardian.com). |
| sha | string, >= 3 characters | Entries starting with the commit sha search string (example: sha=fccebf0562698ab99dc10dcb2e864fc563b25ac4). |
| filepath | string, >= 3 characters | Entries matching this filepath search (example: filepath=myfile.txt). |
| tags | string, enum: DEFAULT_BRANCH, FROM_HISTORICAL_SCAN, CHECK_RUN_SKIP_FALSE_POSITIVE, CHECK_RUN_SKIP_LOW_RISK, CHECK_RUN_SKIP_TEST_CRED, PUBLIC, PUBLICLY_EXPOSED, PUBLICLY_LEAKED, REGRESSION, SENSITIVE_FILE, TEST_FILE, NONE | Occurrences with one of the following tags; use NONE to filter occurrences with no tags (example: tags=FROM_HISTORICAL_SCAN,SENSITIVE_FILE). |
| ordering | string, enum: date, -date | Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. |

Responses: List occurrences; Invalid data; Invalid API key; API under maintenance.
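A sketch of querying occurrences, assuming the listing lives at /v1/occurrences/secrets (the path and the filepath value are assumptions; check the OpenAPI specification for the exact route):

```
# List occurrences still present on GitHub sources, narrowed to one file path.
curl -G -H "Authorization: Token ${TOKEN}" \
  --data-urlencode "source_type=github" \
  --data-urlencode "presence=present" \
  --data-urlencode "filepath=config/settings.py" \
  https://api.gitguardian.com/v1/occurrences/secrets
```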
Manage notes on incidents found by post-receive hooks on your GitGuardian Dashboard.

List notes left on a secret incident in chronological order. Path parameter: incident_id (required, integer). Query parameters: cursor (string): pagination cursor; page (integer >= 0, deprecated, default: 1): page number; per_page (integer [1..100], default: 20): number of items to list per page; ordering (string, enum: created_at, -created_at, updated_at, -updated_at): sort the results by their field value, ASC by default, DESC if the field is preceded by a '-'; member_id (integer, example: member_id=1): filter by member id; search (string, example: search=I revoked this): search notes based on the comment field content. Responses: Incident Note List; Invalid data; Invalid API key; Secret Incident not found; API under maintenance.

Add a note on a secret incident. Path parameter: incident_id (required, integer). Body parameter: comment (required, string): content of the incident note. Responses: The note was created successfully; Invalid data; Invalid API key; Secret Incident not found; API under maintenance.

Update an existing comment on a secret incident. Only incident notes created by the current API key can be updated. Path parameters: incident_id (required, integer); note_id (required, integer): the id of the incident note to update. Body parameter: comment (required, string). Responses: The note was updated successfully; Invalid data; Invalid API key; You cannot update an incident note that was not created by the current API key; Secret Incident or Incident Note not found; API under maintenance.

Delete an existing comment on a secret incident. Only incident notes created by the current API key can be deleted. Path parameters: incident_id (required, integer); note_id (required, integer): the id of the incident note to delete. Responses: The note was deleted successfully; Invalid data; Invalid API key; You cannot delete an incident note that was not created by the current API key; Secret Incident or Incident Note not found; API under maintenance.

Manage honeytokens.

This endpoint allows you to list all the honeytokens of your workspace. The response contains the list of honeytokens and a pagination cursor to retrieve the next page. The honeytokens are sorted by id. If you are using a personal access token, you need to have a role superior or equal to manager.

| Parameter | Type | Description |
|:--|:--|:--|
| cursor | string | Pagination cursor. |
| per_page | integer [1..100], default: 20 | Number of items to list per page. |
| status | string, enum: triggered, active, revoked | Status of a honeytoken. |
| type | string, value: AWS | Type of a honeytoken. |
| search | string | Search honeytokens based on their name and/or description. |
| creator_id | number | Member id of the honeytoken creator. |
| revoker_id | number | Member id of the honeytoken revoker. |
| creator_api_token_id | string | Token id of the honeytoken creator. |
| revoker_api_token_id | string | Token id of the honeytoken revoker. |
| tags | string | Comma-separated list of tags to filter on. |
| ordering | string, enum: created_at, -created_at, triggered_at, -triggered_at, revoked_at, -revoked_at, name, -name | Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. |
| show_token | boolean, default: false | Show token details (access_token_id and secret_key). |

Responses: List honeytokens; Invalid data; Invalid API key; Forbidden Call; API under maintenance.

This endpoint allows you to create a honeytoken of a given type. If you are using a personal access token, you need to have a role superior or equal to manager. Body parameters: name (required, string): honeytoken name; description (string): honeytoken description; type (required, string, value: AWS): honeytoken type; labels (array of objects (Honeytoken Label)): labels to set on the honeytoken; if a label doesn't exist, it will be created. Responses: Honeytoken created; Invalid data; Invalid API key; Forbidden Call; Data conflict; API under maintenance.

This endpoint allows you to create a honeytoken of a given type within a context. The context is a realistic file in which your honeytoken is inserted. If language, project_extensions and filename are not provided, a random context will be generated. Body parameters: name (required, string); description (string); type (required, string, value: AWS); labels (array of objects (Honeytoken Label)): labels to set on the honeytoken, created if they don't exist; language (string): language to use for the context; if not set but project_extensions is set, the languages will be inferred from the extensions; filename (string): filename to use for the context; project_extensions (array of strings): file extensions that can be used for the context. Responses: Honeytoken within a context created; Invalid data; Invalid API key; Forbidden Call; Data conflict; API under maintenance.

Retrieve an existing honeytoken. If you are using a personal access token, you need to have a role greater or equal to manager. Path parameter: honeytoken_id (required, string <uuid>): the id of the honeytoken to retrieve. Query parameter: show_token (boolean, default: false): show token details (access_token_id and secret_key). Responses: Honeytoken details; Invalid data; Invalid API key; Forbidden Call; Honeytoken not found; API under maintenance.
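A sketch of creating and then fetching an AWS honeytoken, assuming the collection lives at /v1/honeytokens; the path, the {key, value} label shape, and the name values are assumptions drawn from this reference:

```
# Create an AWS honeytoken with one label.
curl -X POST -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "prod-db-decoy",
        "description": "Decoy credentials planted in the deploy repo",
        "type": "AWS",
        "labels": [{"key": "env", "value": "production"}]
      }' \
  https://api.gitguardian.com/v1/honeytokens

# Fetch it back, including the generated access_token_id and secret_key
# (the uuid is the placeholder used in the examples above).
curl -H "Authorization: Token ${TOKEN}" \
  "https://api.gitguardian.com/v1/honeytokens/d45a123f-b15d-4fea-abf6-ff2a8479de5b?show_token=true"
```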
Update the name or description of an existing honeytoken. Path parameter: honeytoken_id (required, string <uuid>). Body parameters: name (string): a new honeytoken name; description (string): a new honeytoken description; labels (array of objects (Honeytoken Label)): a new set of labels for the honeytoken, which will completely override the former labels. Responses: The honeytoken was updated successfully; Invalid data; Invalid API key; API under maintenance.

Reset a triggered honeytoken. All the associated events will be closed. Path parameter: honeytoken_id (required, string <uuid>). Responses: Honeytoken was successfully reset; Invalid data; Invalid API key; API under maintenance.

Revoke an active or triggered honeytoken. All the associated events will be closed. Path parameter: honeytoken_id (required, string <uuid>). Responses: Honeytoken was successfully revoked; Invalid data; Invalid API key; API under maintenance.

Manage events on honeytokens.

List events related to all honeytokens of the workspace. Query parameters: cursor (string): pagination cursor; per_page (integer [1..100], default: 20): number of items to list per page; ordering (string, enum: triggered_at, -triggered_at): sort the results by their field value, ASC by default, DESC if the field is preceded by a '-'; honeytoken_id (string <uuid>, example: honeytoken_id=d45a123f-b15d-4fea-abf6-ff2a8479de5b): filter by honeytoken id; status (string, default: open, enum: open, archived, allowed): filter by status; ip_address (string, example: ip_address=8.8.8.8): filter by ip address; tags (string): comma-separated list of tags to filter on; search (string, example: search=I revoked this): search events based on the data field content. Responses: Honeytokens Events List; Invalid data; Invalid API key; API under maintenance.

Manage notes on honeytokens.

List notes left on a honeytoken in chronological order. Path parameter: honeytoken_id (required, string <uuid>). Query parameters: cursor (string): pagination cursor; per_page (integer [1..100], default: 20): number of items to list per page; ordering (string, enum: created_at, -created_at, updated_at, -updated_at); member_id (integer, example: member_id=1): filter by member id; api_token_id (string <uuid>, example: api_token_id=fdf075f9-1662-4cf1-9171-af50568158a8): entries matching this API token id; search (string): search notes based on the comment field content. Responses: Honeytoken Note List; Invalid data; Invalid API key; Honeytoken not found; API under maintenance.

Add a note on a honeytoken. Path parameter: honeytoken_id (required, string <uuid>). Body parameter: comment (required, string): content of the honeytoken note. Responses: The note was created successfully; Invalid data; Invalid API key; Honeytoken not found; API under maintenance.

Update an existing comment on a honeytoken. Only honeytoken notes created by the current API key can be updated. Path parameters: honeytoken_id (required, string <uuid>); note_id (required, string <uuid>): the id of the honeytoken note to update. Body parameter: comment (required, string). Responses: The note was updated successfully; Invalid data; Invalid API key; You cannot update a honeytoken note that was not created by the current API key; Honeytoken or Honeytoken Note not found; API under maintenance.

Delete an existing comment on a honeytoken. Only honeytoken notes created by the current API key can be deleted. Path parameters: honeytoken_id (required, string <uuid>); note_id (required, string <uuid>): the id of the honeytoken note to delete. Responses: The note was deleted successfully; Invalid data; Invalid API key; You cannot delete a honeytoken note that was not created by the current API key; Honeytoken or Honeytoken Note not found; API under maintenance.

Manage sources on honeytokens.

List sources where a honeytoken appears. Path parameter: honeytoken_id (required, string <uuid>). Query parameters: cursor (string): pagination cursor; per_page (integer [1..100], default: 20): number of items to list per page; ordering (string, enum: source_id, -source_id). Responses: Honeytoken Sources List; Invalid data; Invalid API key; Honeytoken not found; API under maintenance.

Manage labels for honeytokens. Labels provide a flexible way to organize honeytokens. You can create your own labels in the form of keys and values, assign them to honeytokens, and use them to filter and search for honeytokens based on specific characteristics.

List labels created for honeytokens in chronological order. Query parameters: cursor (string): pagination cursor; per_page (integer [1..100], default: 20): number of items to list per page; search (string, example: search=env:production): filter only labels which contain the search string in either their key or value; the search string can also be in the key:value format; key (string, example: key=env): filter only labels which have the given key. Responses: Honeytoken Labels List; Invalid data; Invalid API key; Forbidden Call; API under maintenance.

Create a label for honeytokens. Body parameters: key (required, string): the label's key; value (required, string): the label's value. Responses: The label was created successfully; Invalid data; Invalid API key; Forbidden Call; API under maintenance.

Rename a label key. The key will be renamed in all the labels using it. Body parameters: old_key (required, string, example: old_key=env): an existing key to rename; new_key (required, string, example: new_key=env prod): the new name of the key. Responses: The key was changed successfully; Invalid data; Invalid API key; Forbidden Call; API under maintenance.

Delete a key. All the labels using this key will be deleted. Query parameter: key (string, example: key=env): the key whose labels should be deleted. Responses: The key was deleted successfully; Invalid data; Invalid API key; Forbidden Call; API under maintenance.

Rename the value of a label. Path parameter: label_id (required, string, example: 5ddaad0c-5a0c-4674-beb5-1cd198d13360): id of the label. Body parameter: value (required, string): new value for the label. Responses: The value was changed successfully; Invalid data; Invalid API key; Forbidden Call; API under maintenance.

Delete a label for honeytokens. Path parameter: label_id (required, string): id of the label. Responses: The label was deleted successfully; Invalid API key; Forbidden Call; Label not found; API under maintenance.
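A sketch of the label workflow, assuming a /v1/honeytokens/labels collection (the path is an assumption; consult the OpenAPI specification for the exact route):

```
# Create an env:production label.
curl -X POST -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"key": "env", "value": "production"}' \
  https://api.gitguardian.com/v1/honeytokens/labels

# Find labels whose key or value mentions "production".
curl -G -H "Authorization: Token ${TOKEN}" \
  --data-urlencode "search=env:production" \
  https://api.gitguardian.com/v1/honeytokens/labels
```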
Retrieve details on sources known by GitGuardian.

List sources known by GitGuardian.

| Parameter | Type | Description |
|:--|:--|:--|
| cursor | string | Pagination cursor. |
| page | integer >= 0, deprecated, default: 1 | Page number. |
| per_page | integer [1..100], default: 20 | Number of items to list per page. |
| search | string | Sources matching this search (example: search=test-repository). |
| last_scan_status | string, enum: pending, running, canceled, failed, too_large, timeout, finished | Filter sources based on the status of their latest historical scan. |
| health | string, enum: safe, unknown, at_risk | Filter sources based on their health status. |
| type | string, enum: bitbucket, github, gitlab, azure_devops, slack, jira_cloud, confluence_cloud, microsoft_teams | Filter by integration type (example: type=github). |
| ordering | string, enum: last_scan_date, -last_scan_date | Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. |
| visibility | string, enum: public, private, internal | Filter by visibility status (example: visibility=public). |
| external_id | string | Filter by specific external id (example: external_id=1). |
| source_criticality | string, enum: critical, high, medium, low, unknown | Filter by source criticality (example: source_criticality=critical). |
| monitored | boolean | Filter by monitored value (example: monitored=true). |

Responses: Source List; Invalid data; Invalid API key; API under maintenance.

Retrieve a source known by GitGuardian. Path parameter: source_id (required, integer, example: 5523): the id of the source to retrieve. Responses: Source List; Invalid data; Invalid API key; API under maintenance.

List secret incidents linked to a source. Occurrences are not returned in this route. Path parameter: source_id (required, integer, example: 5523): the id of the source to filter on. Accepts the same filters as the secret incidents list above. Responses: Secret Incident List; Invalid data; Invalid API key; API under maintenance.

Retrieve details about workspace members.

List members of the workspace. Query parameters: cursor (string): pagination cursor; page (integer >= 0, deprecated, default: 1): page number; per_page (integer [1..100], default: 20): number of items to list per page; role (string, enum: owner, manager, member, restricted): filter members based on their role; search (string): search members based on their name or email; ordering (string, enum: created_at, -created_at, last_login, -last_login): sort the results by their field value, ASC by default, DESC if the field is preceded by a '-'. Responses: List members; Invalid data; Invalid API key; API under maintenance.

Retrieve an existing workspace member. If you are using a personal access token, you need to have a role greater or equal to member. Path parameter: member_id (required, integer): the id of the workspace member. Responses: Workspace member details; Invalid data; Invalid API key; Forbidden Call; Member not found; API under maintenance.

Delete an existing workspace member. If you are using a personal access token, you need to have a role greater or equal to manager. Path parameter: member_id (required, integer). Responses: The member was deleted successfully; Invalid data; Invalid API key; Forbidden Call; Member not found; API under maintenance.

Update an existing workspace member. If you are using a personal access token, you need to have a role greater or equal to manager. Path parameter: member_id (required, integer). Body parameter: role (string, enum: owner, manager, member, restricted). Responses: Workspace member details; Invalid data; Invalid API key; Forbidden Call; Member not found; API under maintenance.

List teams of a workspace member. The response contains the list of teams and a pagination cursor to retrieve the next page. The teams are sorted by id. If you are using a personal access token, you need to have a role superior or equal to manager, except if the requested member is yourself. Path parameter: member_id (required, integer). Query parameters: cursor (string): pagination cursor; per_page (integer [1..100], default: 20): number of items to list per page; search (string): search teams based on their name and/or description; is_global (boolean): filter on/exclude the "All-incidents" team. Responses: List teams; Invalid data; Invalid API key; Forbidden Call; Not found; API under maintenance.

List team memberships of a workspace member. The response contains the list of team memberships and a pagination cursor to retrieve the next page. The team memberships are sorted by id. If you are using a personal access token, you need to have a role superior or equal to manager, except if the requested member is yourself. Path parameter: member_id (required, integer). Query parameters: cursor (string): pagination cursor; per_page (integer [1..100], default: 20): number of items to list per page; team_id (integer): the id of a team to filter on. Responses: Team membership list; Invalid data; Invalid API key; Permission denied; Not found; API under maintenance.
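A sketch of member administration, assuming a /v1/members collection; the path, the PATCH verb, and the member id 4932 are assumptions from this reference:

```
# List managers, most recently logged in first.
curl -G -H "Authorization: Token ${TOKEN}" \
  --data-urlencode "role=manager" \
  --data-urlencode "ordering=-last_login" \
  https://api.gitguardian.com/v1/members

# Demote a member to the restricted role.
curl -X PATCH -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"role": "restricted"}' \
  https://api.gitguardian.com/v1/members/4932
```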
Manage workspace invitations.

This endpoint allows you to list all pending invitations. The response contains the list of invitations and a pagination cursor to retrieve the next page. The invitations are sorted by id. If you are using a personal access token, you need to have a role superior or equal to member. Query parameters: cursor (string): pagination cursor; per_page (integer [1..100], default: 20): number of items to list per page; search (string): search invitations based on the email field; ordering (string, enum: date, -date): sort the results by their field value, ASC by default, DESC if the field is preceded by a '-'. Responses: List invitations; Invalid data; Invalid API key; Forbidden Call; API under maintenance.

This endpoint allows you to send an invitation to a user. If you are using a personal access token, you need to have a role superior or equal to member. Body parameters: email (required, string): email of the user to invite; role (string, default: member, enum: manager, member, restricted). Responses: Invitation sent; Invalid data; Invalid API key; Forbidden Call; Data conflict; API under maintenance.

Delete an existing invitation. If you are using a personal access token, you need to have a role superior or equal to manager. Path parameter: invitation_id (required, integer). Responses: The invitation was deleted successfully; Invalid data; Invalid API key; Forbidden Call; Invitation not found; API under maintenance.

Resend an existing invitation. If you are using a personal access token, you need to have a role superior or equal to manager. Path parameter: invitation_id (required, integer). Responses: Invitation sent; Invalid data; Invalid API key; Forbidden Call; Invitation not found; Too many requests; API under maintenance.

Use GitGuardian's brain at your leisure: the Scanning API allows you to scan any content you want using GitGuardian's secrets detection algorithms.

Scan provided document content for policy breaks. The request body shouldn't exceed 1MB. This endpoint is stateless: neither the documents nor the secrets found are stored on our servers. Body parameters: filename (string, <= 256 characters); document (required, string). Responses: Successful Scan; Invalid data; Invalid API key; Quota limit reached; API under maintenance.

Scan provided document contents for policy breaks. Results are returned in the same order as the submitted documents. There should not be more than 20 documents in the payload, and individual documents should not exceed 1MB. Quota usage is based on requests, not on content size: one request to this endpoint consumes 1 API call. Also note that the quota is set on a rolling month, not a calendar month; see this documentation for more details. This endpoint is stateless: neither the documents nor the secrets found are stored on our servers. Body parameters (per document): filename (string, <= 256 characters); document (required, string). Responses: Successful Scan; Invalid data; Invalid API key; Quota limit reached; API under maintenance.
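For example, the content scan can be driven directly with curl. The /v1/scan and /v1/multiscan paths below follow GitGuardian's public API as this reference describes it; the payload values are placeholders and the response handling is left out of this sketch:

```
# Scan a single document for policy breaks.
curl -X POST -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"filename": ".env", "document": "AWS_SECRET_ACCESS_KEY=example-not-a-real-key"}' \
  https://api.gitguardian.com/v1/scan

# Scan up to 20 documents in one request; results come back in the same order.
curl -X POST -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '[
        {"filename": "settings.py", "document": "DEBUG = True"},
        {"filename": "id_rsa", "document": "-----BEGIN OPENSSH PRIVATE KEY----- ..."}
      ]' \
  https://api.gitguardian.com/v1/multiscan
```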
Retrieve details on detectors used for secret detection.

List secret detectors. Query parameters: cursor (string): pagination cursor; per_page (integer [1..100], default: 20): number of items to list per page; is_active (boolean, example: is_active=true): filter only active or inactive detectors; type (string, enum: specific, generic, custom; example: type=generic): filter detectors on their type; search (string, example: search=aws); ordering (string, enum: name, -name): sort the results by their field value, ASC by default, DESC if the field is preceded by a '-'. Responses: List of Secret Detectors; Invalid data; Invalid API key; Permission denied; API under maintenance.

Get a secret detector. Path parameter: detector_name (required, string, example: aws_iam): name of the detector to retrieve. Responses: Secret Detector; Invalid API key; Permission denied; Secret Detector not found; API under maintenance.

Retrieve details about quota usage. Check available scanning calls for this token. Quota is shared between all tokens of a workspace. Responses: Quota Overview; Invalid API key; API under maintenance.

Software Composition Analysis.

Compute SCA files amongst a list of files. Body parameter: files (string): list of input files, comma separated.

Scan a directory using the GitGuardian SCA engine. Body parameters: scan_parameters (object (Scan Parameters)); directory (required, string (Directory)).

Scan a diff using the GitGuardian SCA engine. Body parameters: scan_parameters (object (Scan Parameters)); reference (required, string (Reference)); current (required, string (Current)).

This feature is experimental. Scan a tar file directory seeking potential IaC (Infrastructure as Code) vulnerabilities. Returns a list of vulnerabilities found in the files of the scanned directory. Body parameters: directory (required, object <tar>): tar file containing the folder to be scanned; scan_parameters (object (Scan Parameters)). Responses: Scan success; Invalid data; Scanning error.

This feature is experimental. Infrastructure as Code (IaC) differential scanning compares two states of the same project and reports any change in the vulnerabilities of your configuration. Scan tar files containing two states of the same directory (under the names reference and current), seeking changes in potential IaC vulnerabilities. Returns the status of vulnerabilities found in the files of the compared scanned directories. Body parameters: reference (required, object <tar>): tar file containing the reference folder to be scanned; current (required, object <tar>): tar file containing the current folder to be scanned; scan_parameters (object (Scan Parameters)). Responses: Scan success; Invalid data; Scanning error.
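A sketch of watching quota and running the experimental IaC scan; both paths (/v1/quotas and /v1/iac_scan) are assumptions inferred from this reference, as is the multipart upload shape:

```
# Check how many scanning calls remain for the workspace this rolling month.
curl -H "Authorization: Token ${TOKEN}" \
  https://api.gitguardian.com/v1/quotas

# Ship a project directory as a tar archive for the experimental IaC scan.
tar -cf project.tar ./infrastructure
curl -X POST -H "Authorization: Token ${TOKEN}" \
  -F "directory=@project.tar" \
  https://api.gitguardian.com/v1/iac_scan
```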
| | is_global | boolean Filter on/exclude the \"All-incidents\" team. | | search | string Search teams based on their name and/or description. | Pagination cursor. Number of items to list per page. Filter on/exclude the \"All-incidents\" team. Search teams based on their name and/or description. List teams Invalid data Invalid API key Forbidden Call API under maintenance This endpoint allows you to create a team. If you are using a personal access token, you need to have a role superior or equal to manager. If a personal access token is being used, the member is automatically added to the created team with permissions canmanage and fullaccess | 0 | 1 | |:-|:| | namerequired | string | | description | string or null team description. | team description. Team created Invalid data Invalid API key Forbidden Call Data conflict API under maintenance Retrieve an existing team. If you are using a personal access token, you need to have a role greater or equal to member. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team Team details Invalid data Invalid API key Forbidden Call Team not found API under maintenance Delete an existing team. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager. The \"All-incidents\" team (is_global=true) cannot be deleted. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team The team was deleted successfully. Invalid data Invalid API key Forbidden Call Team not found API under maintenance Update a team's name and/or its description. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager. The \"All-incidents\" team (is_global=true) cannot be updated. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team | 0 | 1 | |:|:| | name | string | | description | string or null team description. | team description. The team was updated successfully. Invalid data Invalid API key API under maintenance List secret incidents of a particular team. Occurrences are not returned in this route. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team | 0 | 1 | |:-|:-| | cursor | string Pagination cursor. | | per_page | integer [ 1 .. 100 ] Default: 20 Number of items to list per page. | | datebefore | string <datetime> Example: datebefore=2019-08-30T14:15:22ZEntries found before this date. | | dateafter | string <datetime> Example: dateafter=2019-08-22T14:15:22ZEntries found after this date. | | assigneeemail | string Example: assigneeemail=eric@gitguardian.comIncidents assigned to this email. | | assigneeid | integer Example: assigneeid=4932Incidents assigned to this user id. | | status | string Enum: IGNORED TRIGGERED ASSIGNED RESOLVED Incidents with the following status. | | severity | string Enum: critical high medium low info unknown Filter incidents by severity. | | validity | string Enum: valid invalid failedtocheck no_checker unknown Secrets with the following validity. | | tags | string Enum: DEFAULTBRANCH FROMHISTORICALSCAN CHECKRUNSKIPFALSEPOSITIVE CHECKRUNSKIPLOWRISK CHECKRUNSKIPTESTCRED PUBLIC PUBLICLYEXPOSED PUBLICLYLEAKED REGRESSION SENSITIVEFILE TESTFILE NONE Example: tags=FROMHISTORICALSCAN,SENSITIVEFILEIncidents with one of the following" }, { "data": "Use NONE if you want to filter incidents with no tags. 
| | ordering | string Enum: date -date resolvedat -resolvedat ignoredat -ignoredat Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. | | detectorgroupname | string Example: detectorgroupname=slackbot_tokenIncidents belonging to the specified detector group. | | ignorerid | integer Example: ignorerid=4932Incidents ignored by this user id. | | ignorerapitokenid | string <uuid> Example: ignorerapitokenid=fdf075f9-1662-4cf1-9171-af50568158a8Incidents ignored by this API token id. | | resolverid | integer Example: resolverid=4932Incidents resolved by this user id. | | resolverapitokenid | string <uuid> Example: resolverapitokenid=fdf075f9-1662-4cf1-9171-af50568158a8Incidents resolved by this API token id. | Pagination cursor. Number of items to list per page. Entries found before this date. Entries found after this date. Incidents assigned to this email. Incidents assigned to this user id. Incidents with the following status. Filter incidents by severity. Secrets with the following validity. Incidents with one of the following tags. Use NONE if you want to filter incidents with no tags. Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. Incidents belonging to the specified detector group. Incidents ignored by this user id. Incidents ignored by this API token id. Incidents resolved by this user id. Incidents resolved by this API token id. Secret Incident List Invalid data Invalid API key API under maintenance Manage team invitations. List all existing team invitations. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team | 0 | 1 | |:--|:--| | cursor | string Pagination cursor. | | per_page | integer [ 1 .. 100 ] Default: 20 Number of items to list per page. | | invitation_id | integer The id of an invitation to filter on | | teampermission | string Enum: canmanage cannotmanage Example: teampermission=can_manageFilter team memberships with a specific team permission | | incidentpermission | string Enum: canview canedit fullaccess Example: incidentpermission=caneditFilter team memberships with a specific incident permission | Pagination cursor. Number of items to list per page. The id of an invitation to filter on Filter team memberships with a specific team permission Filter team memberships with a specific incident permission Team invitation list Invalid data Invalid API key Permission denied Not found API under maintenance This endpoint allows you to create a team invitation from an existing team and invitation. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team | 0 | 1 | |:-|:-| | invitation_idrequired | integer | | teampermission | string Enum: canmanage cannot_manage | | incidentpermission | string Enum: canview canedit fullaccess | Team invitation created Invalid data Invalid API key Permission denied Not found Data conflict API under maintenance Update permissions of a team invitation. 
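To make the team-invitation creation endpoint above concrete, here is a minimal sketch using curl. The path /v1/teams/{team_id}/team_invitations and the ids are illustrative assumptions based on the parameters described in this section; this excerpt does not state the exact path:

```
# Hypothetical example: attach pending invitation 4789 to team 42.
# TOKEN must have "can manage" permission on the team, or belong to a
# workspace manager.
curl -X POST \
  -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"invitation_id": 4789, "team_permission": "cannot_manage", "incident_permission": "can_view"}' \
  "https://api.gitguardian.com/v1/teams/42/team_invitations"
```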
If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace" }, { "data": "| 0 | 1 | |:|:--| | team_idrequired | integer The id of the team | | teaminvitationidrequired | integer The id of the team invitation | The id of the team The id of the team invitation | 0 | 1 | |:--|:-| | teampermission | string Enum: canmanage cannot_manage | | incidentpermission | string Enum: canview canedit fullaccess | Team invitation was updated successfully. Invalid data Invalid API key Permission denied Not found API under maintenance Delete an existing team invitation. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager. | 0 | 1 | |:|:--| | team_idrequired | integer The id of the team | | teaminvitationidrequired | integer The id of the team invitation | The id of the team The id of the team invitation Team invitation was deleted successfully. Invalid API key Permission denied Not found API under maintenance Add or remove members from teams, or update their permissions. List all the memberships of a team. If you are using a personal access token, you need to be a workspace manager or be part of the team. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team | 0 | 1 | |:--|:--| | cursor | string Pagination cursor. | | per_page | integer [ 1 .. 100 ] Default: 20 Number of items to list per page. | | teampermission | string Enum: canmanage cannotmanage Example: teampermission=can_manageFilter team memberships with a specific team permission | | incidentpermission | string Enum: canview canedit fullaccess Example: incidentpermission=caneditFilter team memberships with a specific incident permission | | memberid | number Example: memberid=1234Filter team memberships on a specific member | Pagination cursor. Number of items to list per page. Filter team memberships with a specific team permission Filter team memberships with a specific incident permission Filter team memberships on a specific member Team membership list Invalid data Invalid API key Permission denied Not found API under maintenance Add a member to a team. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team | 0 | 1 | |:--|:-| | member_id | integer Id of a workspace member. | | teampermission | string Enum: canmanage cannot_manage | | incidentpermission | string Enum: canview canedit fullaccess | Id of a workspace member. Team membership created Invalid data Invalid API key Permission denied Not found Data conflict API under maintenance Update permissions of a team membership. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager. | 0 | 1 | |:|:--| | team_idrequired | integer The id of the team | | teammembershipidrequired | integer The id of the team membership | The id of the team The id of the team membership | 0 | 1 | |:--|:-| | teampermission | string Enum: canmanage cannot_manage | | incidentpermission | string Enum: canview canedit fullaccess | Team membership was updated successfully. Invalid data Invalid API key Permission denied Not found API under maintenance Remove a member from a team. 
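Similarly, a minimal sketch of the membership-creation endpoint described above; the path /v1/teams/{team_id}/team_memberships and the ids are illustrative assumptions not stated in this excerpt:

```
# Hypothetical example: add workspace member 1234 to team 42 with
# "cannot_manage" team permission and "can_edit" incident permission.
curl -X POST \
  -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"member_id": 1234, "team_permission": "cannot_manage", "incident_permission": "can_edit"}' \
  "https://api.gitguardian.com/v1/teams/42/team_memberships"
```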
If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager, or be the member being" }, { "data": "| 0 | 1 | |:|:--| | team_idrequired | integer The id of the team | | teammembershipidrequired | integer The id of the team membership | The id of the team The id of the team membership Team membership was deleted successfully. Invalid API key Permission denied Not found API under maintenance List pending requests of a team. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team | 0 | 1 | |:-|:-| | cursor | string Pagination cursor. | | per_page | integer [ 1 .. 100 ] Default: 20 Number of items to list per page. | | memberid | number Example: memberid=1234Filter requests coming from a specific member | Pagination cursor. Number of items to list per page. Filter requests coming from a specific member Team request list Invalid data Invalid API key Permission denied Not found API under maintenance Create an access request to a team. You must be authenticated via a Personal Access Token. You must not already have a pending request on the team, be a member of the team, be a workspace manager or have the restricted role. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team Team request created Invalid data Invalid API key Permission denied Not found Data conflict API under maintenance Cancel or decline a team request. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager, or be the member who created the request being cancelled. | 0 | 1 | |:|:--| | team_idrequired | integer The id of the team | | teamrequestidrequired | integer The id of the team request | The id of the team The id of the team request Team request deleted Invalid API key Permission denied Not found API under maintenance Accept a team request by adding the member to the team. If you are using a personal access token, you must have \"can manage\" permission on the team or be a workspace manager. | 0 | 1 | |:|:--| | team_idrequired | integer The id of the team | | teamrequestidrequired | integer The id of the team request | The id of the team The id of the team request | 0 | 1 | |:--|:-| | teampermission | string Enum: canmanage cannot_manage | | incidentpermission | string Enum: canview canedit fullaccess | Team request accepted Invalid data Invalid API key Permission denied Not found API under maintenance List pending team requests of a member. If you are using a personal access token, you need to be either a workspace manager or the member being queried. | 0 | 1 | |:|:| | member_idrequired | integer The id of the workspace member | The id of the workspace member | 0 | 1 | |:|:-| | cursor | string Pagination cursor. | | per_page | integer [ 1 .. 100 ] Default: 20 Number of items to list per page. | | teamid | number Example: teamid=1234Filter requests to a specific team | Pagination cursor. Number of items to list per page. Filter requests to a specific team Team request list Invalid data Invalid API key Permission denied Not found API under maintenance List sources belonging to a team's" }, { "data": "| 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team | 0 | 1 | |:--|:-| | cursor | string Pagination cursor. | | per_page | integer [ 1 .. 100 ] Default: 20 Number of items to list per page. 
| | search | string Example: search=test-repositorySources matching this search. | | lastscanstatus | string Enum: pending running canceled failed too_large timeout finished Filter sources based on the status of their latest historical scan. | | health | string Enum: safe unknown at_risk Filter sources based on their health status. | | type | string Enum: bitbucket github gitlab azure_devops Example: type=githubFilter by integration type. | | ordering | string Enum: lastscandate -lastscandate Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. | | visibility | string Enum: public private internal Example: visibility=publicFilter by visibility status. | | externalid | string Example: externalid=1Filter by specific external id. | Pagination cursor. Number of items to list per page. Sources matching this search. Filter sources based on the status of their latest historical scan. Filter sources based on their health status. Filter by integration type. Sort the results by their field value. The default sort is ASC, DESC if the field is preceded by a '-'. Filter by visibility status. Filter by specific external id. Source list Invalid data Invalid API key Permission denied Not found API under maintenance This endpoint allows you to add and remove sources from the perimeter of a team. If you are using a personal access token, you need to be a workspace manager. | 0 | 1 | |:-|:| | team_idrequired | integer The id of the team | The id of the team | 0 | 1 | |:|:| | sourcestoadd | Array of integers Ids of sources to add to the perimeter. | | sourcestoremove | Array of integers Ids of sources to remove from the perimeter. | Ids of sources to add to the perimeter. Ids of sources to remove from the perimeter. Team perimeter updated Invalid data Invalid API key Permission denied Not found API under maintenance List audit logs. | 0 | 1 | |:-|:| | cursor | string Pagination cursor. | | per_page | integer [ 1 .. 100 ] Default: 20 Number of items to list per page. | | datebefore | string <datetime> Example: datebefore=2019-08-30T14:15:22ZEntries found before this date. | | dateafter | string <datetime> Example: dateafter=2019-08-22T14:15:22ZEntries found after this date. | | eventname | string Example: eventname=user.logged_inEntries matching this event name. | | memberid | integer Example: memberid=3252The id of the member to retrieve. | | membername | string Example: membername=John SmithEntries matching this member name. | | memberemail | string Example: memberemail=john.smith@example.orgEntries matching this member email. | | apitokenid | string <uuid> Example: apitokenid=fdf075f9-1662-4cf1-9171-af50568158a8Entries matching this API token id. | | ipaddress | string Example: ipaddress=8.8.8.8Entries matching this IP address. | Pagination cursor. Number of items to list per page. Entries found before this date. Entries found after this date. Entries matching this event name. The id of the member to retrieve. Entries matching this member name. Entries matching this member email. Entries matching this API token id. Entries matching this IP address. Audit logs List Invalid data Invalid API key API under maintenance Check the status of the API and the overall system health. Check the status of the API and your token without spending your quota. API on and key valid response Invalid API key API under maintenance ``` curl" } ]
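-H "Authorization: Token ${TOKEN}" \
  https://api.gitguardian.com/v1/health   # assumed completion of the truncated health-check example
```

As a closing sketch for the scanning endpoints described earlier in this section, the following call submits one document to the document scan endpoint; the /v1/scan path and the example payload are assumptions consistent with the filename/document fields documented above:

```
# Scan a single document for policy breaks. The endpoint is stateless;
# neither the document nor any detected secrets are stored server-side.
curl -X POST \
  -H "Authorization: Token ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"filename": "config.env", "document": "SECRET_KEY=example-not-a-real-secret"}' \
  https://api.gitguardian.com/v1/scan
```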
{ "category": "Provisioning", "file_name": "swagger.html.md", "project_name": "Fugue", "subcategory": "Security & Compliance" }
[ { "data": "Welcome to the Fugue API reference. For an interactive reference, see the Swagger UI. Additional documentation: See the API User Guide for more information. Last updated 2022.03.17 | 0 | 1 | |:--|:--| | Security Scheme Type | API Key | | Header parameter name: | Authorization | Lists details for all environments. Example API request here. | 0 | 1 | |:-|:| | offset | integer >= 0 Default: 0 Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. | | max_items | integer [ 1 .. 100 ] Default: 100 Maximum number of items to return. | | orderby | string Default: \"createdat\" Enum: \"createdat\" \"name\" \"lastscanat\" Field to sort the items by. Values - createdat, name, lastscanat | | order_direction | string Default: \"desc\" Enum: \"asc\" \"desc\" Direction to sort the items in. Values - asc, desc | | query | string [ 2 .. 1800 ] characters A stringified JSON array of search parameters. Alternatively, you can use individual query parameter for the following filters - q.id, q.ruleid, q.resourcetype, q.resourceprovider, q.resourceid, q.name, and q.search. q.search looks across id, name, and rule_id | Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. Maximum number of items to return. Field to sort the items by. Values - createdat, name, lastscan_at Direction to sort the items in. Values - asc, desc A stringified JSON array of search parameters. Alternatively, you can use individual query parameter for the following filters - q.id, q.ruleid, q.resourcetype, q.resourceprovider, q.resourceid, q.name, and q.search. q.search looks across id, name, and rule_id List of environments and details. | 0 | 1 | |:-|:-| | items | Array of objects (Environment) [ items ] Paginated list of environments. | | is_truncated | boolean Indicates whether there are more items at the next offset. | | next_offset | integer Next offset to use to get the next page of items. | | count | integer Total number of items. | Paginated list of environments. Indicates whether there are more items at the next offset. Next offset to use to get the next page of items. Total number of items. BadRequestError AuthenticationError AuthorizationError InternalServerError Creates a new environment. Example API request here. Configuration options for the new environment. | 0 | 1 | |:-|:--| | name | string Name of the environment. | | provider | string Enum: \"aws\" \"awsgovcloud\" \"azure\" \"google\" \"repository\" Name of the service provider for the environment. Values - aws, awsgovcloud, azure, google, repository. (For Azure Government, use azure.) | | provider_options | object (ProviderOptions) Provider options. | | compliancefamilies | Array of strings List of compliance families validated against the environment. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE | | surveyresourcetypes | Array of strings List of resource types to be surveyed (aws and awsgovcloud only -- for azure, see ProviderOptionsAzure). 
Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. | | remediateresourcetypes | Array of strings List of resource types to be enforced if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service" }, { "data": "| | scanscheduleenabled | boolean Indicates if the new environment should have scans run on a schedule upon creation. Learn more about scan intervals. | | scaninterval | integer >= 300 Time in seconds between the end of one scan to the start of the next. Must also set scanschedule_enabled to true. Learn more about scan intervals. | Name of the environment. Name of the service provider for the environment. Values - aws, aws_govcloud, azure, google, repository. (For Azure Government, use azure.) Provider options. List of compliance families validated against the environment. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2_v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE List of resource types to be surveyed (aws and awsgovcloud only -- for azure, see ProviderOptionsAzure). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. List of resource types to be enforced if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. Indicates if the new environment should have scans run on a schedule upon creation. Learn more about scan intervals. Time in seconds between the end of one scan to the start of the next. Must also set scanscheduleenabled to true. Learn more about scan intervals. New environment details. | 0 | 1 | |:-|:--| | id | string ID of the environment. | | tenant_id | string ID of the tenant that owns the environment. | | name | string Name of the environment. | | provider | string Enum: \"aws\" \"awsgovcloud\" \"azure\" \"google\" \"repository\" Name of the service provider for the environment. Values - aws, awsgovcloud, azure, google, repository. (For Azure Government, use azure.) | | provider_options | object (ProviderOptions) Provider options. | | compliancefamilies | Array of strings List of compliance families validated against the environment. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE | | baseline_id | string Scan ID of the baseline if baseline is enabled. Learn how to find a scan ID. Learn more about baselines, drift detection, and enforcement. | | drift | boolean Indicates whether drift detection is enabled for the environment. Learn more about baselines, drift detection, and enforcement. 
| | remediation | boolean Indicates whether enforcement is enabled for the environment. Enforcement is supported for AWS & AWS GovCloud. Learn more about baselines, drift detection, and enforcement. | | scanstatus | string Enum: \"CREATED\" \"QUEUED\" \"INPROGRESS\" \"ERROR\" \"SUCCESS\" \"CANCELED\" Status of the current or most recently completed scan for the environment. Values - CREATED, QUEUED, IN_PROGRESS, ERROR, SUCCESS, CANCELED | | scan_interval | integer Time in seconds between the end of one scan to the start of the next. Learn more about scan intervals. | | lastscanat | integer When the current or most recently completed scan for the environment started, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | nextscanat | integer When the next scan will start, Unix time. Learn how to convert to or from Unix time in the API User" }, { "data": "| | surveyresourcetypes | Array of strings List of resource types surveyed for the environment(aws and awsgovcloud only -- for azure, see ProviderOptionsAzure). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. | | remediateresourcetypes | Array of strings List of resource types enforced for the environment if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. | | scanscheduleenabled | boolean Indicates whether the environment should have scans run on a schedule. Learn more about scan intervals. | ID of the environment. ID of the tenant that owns the environment. Name of the environment. Name of the service provider for the environment. Values - aws, aws_govcloud, azure, google, repository. (For Azure Government, use azure.) Provider options. List of compliance families validated against the environment. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2_v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE Scan ID of the baseline if baseline is enabled. Learn how to find a scan ID. Learn more about baselines, drift detection, and enforcement. Indicates whether drift detection is enabled for the environment. Learn more about baselines, drift detection, and enforcement. Indicates whether enforcement is enabled for the environment. Enforcement is supported for AWS & AWS GovCloud. Learn more about baselines, drift detection, and enforcement. Status of the current or most recently completed scan for the environment. Values - CREATED, QUEUED, IN_PROGRESS, ERROR, SUCCESS, CANCELED Time in seconds between the end of one scan to the start of the next. Learn more about scan intervals. When the current or most recently completed scan for the environment started, Unix time. Learn how to convert to or from Unix time in the API User Guide. When the next scan will start, Unix time. Learn how to convert to or from Unix time in the API User Guide. List of resource types surveyed for the environment(aws and awsgovcloud only -- for azure, see ProviderOptionsAzure). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. 
List of resource types enforced for the environment if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. Indicates whether the environment should have scans run on a schedule. Learn more about scan intervals. BadRequestError AuthenticationError AuthorizationError NotFoundError InternalServerError Retrieves details and resource summary for an environment. Example API request here. | 0 | 1 | |:|:--| | environment_id required | string Environment ID. Learn how to find your environment ID. | Environment ID. Learn how to find your environment ID. Environment details. | 0 | 1 | |:-|:--| | id | string ID of the environment. | | tenant_id | string ID of the tenant that owns the environment. | | name | string Name of the environment. | | provider | string Enum: \"aws\" \"awsgovcloud\" \"azure\" \"google\" \"repository\" Name of the service provider for the environment. Values - aws, awsgovcloud, azure, google, repository. (For Azure Government, use azure.) | | provider_options | object (ProviderOptions) Provider options. | | compliancefamilies | Array of strings List of compliance families validated against the environment. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2v2017, FBP (AWS & AWS GovCloud only)," }, { "data": "The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE | | baseline_id | string Scan ID of the baseline if baseline is enabled. Learn how to find a scan ID. Learn more about baselines, drift detection, and enforcement. | | drift | boolean Indicates whether drift detection is enabled for the environment. Learn more about baselines, drift detection, and enforcement. | | remediation | boolean Indicates whether enforcement is enabled for the environment. Enforcement is supported for AWS & AWS GovCloud. Learn more about baselines, drift detection, and enforcement. | | scanstatus | string Enum: \"CREATED\" \"QUEUED\" \"INPROGRESS\" \"ERROR\" \"SUCCESS\" \"CANCELED\" Status of the current or most recently completed scan for the environment. Values - CREATED, QUEUED, IN_PROGRESS, ERROR, SUCCESS, CANCELED | | scan_interval | integer Time in seconds between the end of one scan to the start of the next. Learn more about scan intervals. | | lastscanat | integer When the current or most recently completed scan for the environment started, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | nextscanat | integer When the next scan will start, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | surveyresourcetypes | Array of strings List of resource types surveyed for the environment(aws and awsgovcloud only -- for azure, see ProviderOptionsAzure). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. | | remediateresourcetypes | Array of strings List of resource types enforced for the environment if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. | | scanscheduleenabled | boolean Indicates whether the environment should have scans run on a schedule. 
Learn more about scan intervals. | | resource_summary | object (ResourceSummary) Summary of resources for a scan. | ID of the environment. ID of the tenant that owns the environment. Name of the environment. Name of the service provider for the environment. Values - aws, aws_govcloud, azure, google, repository. (For Azure Government, use azure.) Provider options. List of compliance families validated against the environment. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2_v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE Scan ID of the baseline if baseline is enabled. Learn how to find a scan ID. Learn more about baselines, drift detection, and enforcement. Indicates whether drift detection is enabled for the environment. Learn more about baselines, drift detection, and enforcement. Indicates whether enforcement is enabled for the environment. Enforcement is supported for AWS & AWS GovCloud. Learn more about baselines, drift detection, and enforcement. Status of the current or most recently completed scan for the environment. Values - CREATED, QUEUED, IN_PROGRESS, ERROR, SUCCESS, CANCELED Time in seconds between the end of one scan to the start of the next. Learn more about scan intervals. When the current or most recently completed scan for the environment started, Unix time. Learn how to convert to or from Unix time in the API User Guide. When the next scan will start, Unix time. Learn how to convert to or from Unix time in the API User Guide. List of resource types surveyed for the environment(aws and awsgovcloud only -- for azure, see ProviderOptionsAzure). Find resource types with GET /metadata/{provider}/resourcetypes or see Service" }, { "data": "List of resource types enforced for the environment if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. Indicates whether the environment should have scans run on a schedule. Learn more about scan intervals. Summary of resources for a scan. BadRequestError AuthenticationError AuthorizationError NotFoundError InternalServerError Updates an environment. Example API request here. | 0 | 1 | |:|:--| | environment_id required | string Environment ID. Learn how to find your environment ID. | Environment ID. Learn how to find your environment ID. Environment details to update. | 0 | 1 | |:-|:--| | name | string Name of the environment. | | provider | string Enum: \"aws\" \"awsgovcloud\" \"azure\" \"google\" \"repository\" Name of the service provider for the environment. Values - aws, awsgovcloud, azure, google, repository. (For Azure Government, use azure.) | | provider_options | object (ProviderOptionsUpdateInput) Mutable provider options. | | compliancefamilies | Array of strings List of compliance families validated against the environment. 
Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE | | baseline_id | string Scan ID of the baseline if baseline is enabled. Learn how to find a scan ID. Learn more about baselines, drift detection, and enforcement. | | remediation | boolean Indicates whether enforcement is enabled for the environment. Enforcement is supported for AWS & AWS GovCloud. Learn more about baselines, drift detection, and enforcement. | | surveyresourcetypes | Array of strings List of resource types surveyed for the environment (aws and awsgovcloud only -- for azure, see ProviderOptionsAzureUpdateInput). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. | | remediateresourcetypes | Array of strings List of resource types enforced for the environment if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. Learn more about baselines, drift detection, and enforcement. | | scanscheduleenabled | boolean Indicates whether an environment is scanned on a schedule. Learn more about scan intervals. | | scaninterval | integer >= 300 Time in seconds between the end of one scan to the start of the next. Must also set scanschedule_enabled to true. Learn more about scan intervals. | Name of the environment. Name of the service provider for the environment. Values - aws, aws_govcloud, azure, google, repository. (For Azure Government, use azure.) Mutable provider options. List of compliance families validated against the environment. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2_v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE Scan ID of the baseline if baseline is enabled. Learn how to find a scan ID. Learn more about baselines, drift detection, and enforcement. Indicates whether enforcement is enabled for the environment. Enforcement is supported for AWS & AWS GovCloud. Learn more about baselines, drift detection, and enforcement. List of resource types surveyed for the environment (aws and awsgovcloud only -- for azure, see ProviderOptionsAzureUpdateInput). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. List of resource types enforced for the environment if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. Learn more about baselines, drift detection, and" }, { "data": "Indicates whether an environment is scanned on a schedule. Learn more about scan intervals. Time in seconds between the end of one scan to the start of the next. Must also set scanscheduleenabled to true. 
Learn more about scan intervals. Updated environment details. | 0 | 1 | |:-|:--| | id | string ID of the environment. | | tenant_id | string ID of the tenant that owns the environment. | | name | string Name of the environment. | | provider | string Enum: \"aws\" \"awsgovcloud\" \"azure\" \"google\" \"repository\" Name of the service provider for the environment. Values - aws, awsgovcloud, azure, google, repository. (For Azure Government, use azure.) | | provider_options | object (ProviderOptions) Provider options. | | compliancefamilies | Array of strings List of compliance families validated against the environment. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE | | baseline_id | string Scan ID of the baseline if baseline is enabled. Learn how to find a scan ID. Learn more about baselines, drift detection, and enforcement. | | drift | boolean Indicates whether drift detection is enabled for the environment. Learn more about baselines, drift detection, and enforcement. | | remediation | boolean Indicates whether enforcement is enabled for the environment. Enforcement is supported for AWS & AWS GovCloud. Learn more about baselines, drift detection, and enforcement. | | scanstatus | string Enum: \"CREATED\" \"QUEUED\" \"INPROGRESS\" \"ERROR\" \"SUCCESS\" \"CANCELED\" Status of the current or most recently completed scan for the environment. Values - CREATED, QUEUED, IN_PROGRESS, ERROR, SUCCESS, CANCELED | | scan_interval | integer Time in seconds between the end of one scan to the start of the next. Learn more about scan intervals. | | lastscanat | integer When the current or most recently completed scan for the environment started, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | nextscanat | integer When the next scan will start, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | surveyresourcetypes | Array of strings List of resource types surveyed for the environment(aws and awsgovcloud only -- for azure, see ProviderOptionsAzure). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. | | remediateresourcetypes | Array of strings List of resource types enforced for the environment if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. | | scanscheduleenabled | boolean Indicates whether the environment should have scans run on a schedule. Learn more about scan intervals. | ID of the environment. ID of the tenant that owns the environment. Name of the environment. Name of the service provider for the environment. Values - aws, aws_govcloud, azure, google, repository. (For Azure Government, use azure.) Provider options. List of compliance families validated against the environment. 
Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2_v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE Scan ID of the baseline if baseline is enabled. Learn how to find a scan ID. Learn more about baselines, drift detection, and" }, { "data": "Indicates whether drift detection is enabled for the environment. Learn more about baselines, drift detection, and enforcement. Indicates whether enforcement is enabled for the environment. Enforcement is supported for AWS & AWS GovCloud. Learn more about baselines, drift detection, and enforcement. Status of the current or most recently completed scan for the environment. Values - CREATED, QUEUED, IN_PROGRESS, ERROR, SUCCESS, CANCELED Time in seconds between the end of one scan to the start of the next. Learn more about scan intervals. When the current or most recently completed scan for the environment started, Unix time. Learn how to convert to or from Unix time in the API User Guide. When the next scan will start, Unix time. Learn how to convert to or from Unix time in the API User Guide. List of resource types surveyed for the environment(aws and awsgovcloud only -- for azure, see ProviderOptionsAzure). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. List of resource types enforced for the environment if enforcement is enabled (aws and awsgovcloud only). Find resource types with GET /metadata/{provider}/resourcetypes or see Service Coverage. Indicates whether the environment should have scans run on a schedule. Learn more about scan intervals. BadRequestError AuthenticationError AuthorizationError NotFoundError InternalServerError Deletes an environment. Example API request here. | 0 | 1 | |:|:--| | environment_id required | string Environment ID. Learn how to find your environment ID. | Environment ID. Learn how to find your environment ID. Environment deleted. BadRequestError AuthenticationError AuthorizationError NotFoundError InternalServerError Returns the rules that would be applied to this environment according to its current environment settings. Example API request here. | 0 | 1 | |:|:--| | environment_id required | string Environment ID. | Environment ID. Environment rules. | 0 | 1 | |:|:--| | items required | Array of objects (EnvironmentRulesRule) [ items ] | BadRequestError AuthenticationError AuthorizationError NotFoundError InternalServerError Lists scans for an environment. Example API request here. | 0 | 1 | |:|:-| | environment_id required | string ID of the environment to retrieve scans for. Learn how to find your environment ID. | | offset | integer >= 0 Default: 0 Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. | | max_items | integer [ 1 .. 100 ] Default: 100 Maximum number of items to return. | | orderby | string Default: \"createdat\" Enum: \"createdat\" \"finishedat\" \"updatedat\" Field to sort the items by. Values - createdat, finishedat, updatedat | | order_direction | string Default: \"desc\" Enum: \"asc\" \"desc\" Direction to sort the items in. 
Values - asc, desc | | status | Array of stringsItems Enum: \"CREATED\" \"QUEUED\" \"INPROGRESS\" \"ERROR\" \"SUCCESS\" \"CANCELED\" Status to filter by. When not specified, all statuses will be returned. Values - CREATED, QUEUED, INPROGRESS, ERROR, SUCCESS, CANCELED | | rangefrom | integer >= 0 Earliest createdat time to return scans from, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | rangeto | integer >= 0 Latest createdat time to return scans from, Unix time. Learn how to convert to or from Unix time in the API User Guide. | ID of the environment to retrieve scans for. Learn how to find your environment ID. Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. Maximum number of items to return. Field to sort the items by. Values - createdat, finishedat, updated_at Direction to sort the items" }, { "data": "Values - asc, desc Status to filter by. When not specified, all statuses will be returned. Values - CREATED, QUEUED, IN_PROGRESS, ERROR, SUCCESS, CANCELED Earliest created_at time to return scans from, Unix time. Learn how to convert to or from Unix time in the API User Guide. Latest created_at time to return scans from, Unix time. Learn how to convert to or from Unix time in the API User Guide. List of scans. | 0 | 1 | |:-|:-| | items | Array of objects (Scan) [ items ] Paginated list of scans. | | is_truncated | boolean Indicates whether there are more items at the next offset. | | next_offset | integer Next offset to use to get the next page of items. | | count | integer Total number of items. | Paginated list of scans. Indicates whether there are more items at the next offset. Next offset to use to get the next page of items. Total number of items. BadRequestError AuthenticationError AuthorizationError InternalServerError Creates and triggers a new environment scan. Example API request here. Learn more about manually initiating a scan. | 0 | 1 | |:|:--| | environment_id required | string ID of the environment to scan. Learn how to find your environment ID. | ID of the environment to scan. Learn how to find your environment ID. Scan details. | 0 | 1 | |:|:| | id | string ID of the scan. | | environment_id | string ID of the environment the scan belongs to. | | created_at | integer When the scan was created, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | updated_at | integer When the scan was last updated, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | finished_at | integer When the scan was finished, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | status | string Enum: \"CREATED\" \"QUEUED\" \"INPROGRESS\" \"ERROR\" \"SUCCESS\" \"CANCELED\" Status of the scan. Values - CREATED, QUEUED, INPROGRESS, ERROR, SUCCESS, CANCELED | | message | string Message related to the scan. | | remediation_error | boolean Indicates whether there were any enforcement errors on the scan. | ID of the scan. ID of the environment the scan belongs to. When the scan was created, Unix time. Learn how to convert to or from Unix time in the API User Guide. When the scan was last updated, Unix time. Learn how to convert to or from Unix time in the API User Guide. When the scan was finished, Unix time. Learn how to convert to or from Unix time in the API User Guide. Status of the scan. Values - CREATED, QUEUED, IN_PROGRESS, ERROR, SUCCESS, CANCELED Message related to the scan. 
Indicates whether there were any enforcement errors on the scan. BadRequestError AuthenticationError AuthorizationError InternalServerError Retrieves details for a scan. Example API request here. | 0 | 1 | |:--|:| | scan_id required | string Scan ID. Learn how to find your scan ID. | Scan ID. Learn how to find your scan ID. Scan details. | 0 | 1 | |:|:| | id | string ID of the scan. | | environment_id | string ID of the environment the scan belongs to. | | created_at | integer When the scan was created, Unix time. Learn how to convert to or from Unix time in the API User" }, { "data": "| | updated_at | integer When the scan was last updated, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | finished_at | integer When the scan was finished, Unix time. Learn how to convert to or from Unix time in the API User Guide. | | status | string Enum: \"CREATED\" \"QUEUED\" \"INPROGRESS\" \"ERROR\" \"SUCCESS\" \"CANCELED\" Status of the scan. Values - CREATED, QUEUED, INPROGRESS, ERROR, SUCCESS, CANCELED | | message | string Message related to the scan. | | remediation_error | boolean Indicates whether there were any enforcement errors on the scan. | | resource_summary | object (ResourceSummary) Summary of resources for a scan. | | resourcetypeerrors | Array of objects[ items ] | ID of the scan. ID of the environment the scan belongs to. When the scan was created, Unix time. Learn how to convert to or from Unix time in the API User Guide. When the scan was last updated, Unix time. Learn how to convert to or from Unix time in the API User Guide. When the scan was finished, Unix time. Learn how to convert to or from Unix time in the API User Guide. Status of the scan. Values - CREATED, QUEUED, IN_PROGRESS, ERROR, SUCCESS, CANCELED Message related to the scan. Indicates whether there were any enforcement errors on the scan. Summary of resources for a scan. BadRequestError AuthenticationError AuthorizationError NotFoundError InternalServerError Lists compliance results by control for a scan. Example API request here. Learn about rules and controls here. | 0 | 1 | |:--|:| | scan_id required | string Scan ID. Learn how to find your scan ID. | Scan ID. Learn how to find your scan ID. | 0 | 1 | |:-|:| | offset | integer >= 0 Default: 0 Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. | | max_items | integer [ 1 .. 100 ] Default: 100 Maximum number of items to return. | | family | Array of strings Compliance family to filter by. When not specified, all compliance families will be returned. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2_v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE | | result | Array of stringsItems Enum: \"PASS\" \"FAIL\" \"UNKNOWN\" Control result to filter by. When not specified, all results will be returned. Note that in the API, a MISSING DATA state is referred to as UNKNOWN. Values - PASS, FAIL, UNKNOWN | Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. Maximum number of items to return. 
Compliance family to filter by. When not specified, all compliance families will be returned. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2_v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE Control result to filter by. When not specified, all results will be returned. Note that in the API, a MISSING DATA state is referred to as UNKNOWN. Values - PASS, FAIL, UNKNOWN List of compliance results from a scan grouped by" }, { "data": "| 0 | 1 | |:-|:| | items | Array of objects (ComplianceByRule) [ items ] Paginated list of compliance controls and results for a scan. | | is_truncated | boolean Indicates whether there are more items at the next offset. | | next_offset | integer Next offset to use to get the next page of items. | | count | integer Total number of items. | Paginated list of compliance controls and results for a scan. Indicates whether there are more items at the next offset. Next offset to use to get the next page of items. Total number of items. BadRequestError AuthenticationError AuthorizationError NotFoundError InternalServerError Lists compliance results by resource type for a scan. Example API request here. | 0 | 1 | |:--|:| | scan_id required | string Scan ID. Learn how to find your scan ID. | Scan ID. Learn how to find your scan ID. | 0 | 1 | |:--|:| | offset | integer >= 0 Default: 0 Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. | | max_items | integer [ 1 .. 100 ] Default: 100 Maximum number of items to return. | | resourcetype | Array of strings Resource types to filter by. When not specified, all resource types will be returned. Find resource types with GET /metadata/{provider}/resourcetypes or see our service coverage pages for AWS & AWS GovCloud and Azure & Azure Government. | | family | Array of strings Compliance family to filter by. When not specified, all compliance families will be returned. Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2_v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE | Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. Maximum number of items to return. Resource types to filter by. When not specified, all resource types will be returned. Find resource types with GET /metadata/{provider}/resource_types or see our service coverage pages for AWS & AWS GovCloud and Azure & Azure Government. Compliance family to filter by. When not specified, all compliance families will be returned. 
Values - AWS-Well-Architectedv2020-07-02, CIS-AWSv1.2.0, CIS-AWSv1.3.0, CIS-AWSv1.4.0, CIS-Azurev1.1.0, CIS-Azurev1.3.0, CIS-Dockerv1.2.0, CIS-Googlev1.1.0, CIS-Googlev1.2.0, CIS-Controlsv7.1, CSA-CCMv3.0.1, GDPRv2016, HIPAAv2013, ISO-27001v2013, NIST-800-53vRev4, PCI-DSSv3.2.1, SOC-2_v2017, FBP (AWS & AWS GovCloud only), Custom. The following values are deprecated - CIS, CISAZURE, CISCONTROLS, CISCONTROLSAZURE, CSACCM, GDPR, GDPRAZURE, HIPAA, HIPAAAZURE, ISO27001, ISO27001AZURE, NIST, NISTAZURE, PCI, PCIAZURE, SOC2, SOC2AZURE List of compliance results from a scan grouped by resource type. | 0 | 1 | |:-|:| | items | Array of objects (ComplianceByResourceType) [ items ] Paginated list of compliance results grouped by resource type. | | is_truncated | boolean Indicates whether there are more items at the next offset. | | next_offset | integer Next offset to use to get the next page of items. | | count | integer Total number of items. | Paginated list of compliance results grouped by resource type. Indicates whether there are more items at the next offset. Next offset to use to get the next page of items. Total number of items. BadRequestError AuthenticationError AuthorizationError NotFoundError InternalServerError Lists drift, enforcement, and compliance events for an environment. Example API request here. | 0 | 1 | |:|:-| | environment_id required | string Environment" }, { "data": "Learn how to find your environment ID. | | offset | integer >= 0 Default: 0 Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. | | max_items | integer [ 1 .. 100 ] Default: 100 Maximum number of items to return. | | rangefrom | integer Earliest createdat time to return events from, Unix time. Learn how to convert to or from Unix time in the API User Guide. Supports relative time offsets in seconds; for example, to retrieve events from the last 24 hours, enter -86400. | | rangeto | integer Latest createdat time to return events from, Unix time. Learn how to convert to or from Unix time in the API User Guide. Supports relative time offsets in seconds; for example, to retrieve events between one and two hours ago, set rangefrom to -7200 and rangeto to -3600. | | event_type | Array of stringsItems Enum: \"DRIFT\" \"REMEDIATION\" \"COMPLIANCE\" Event type to filter by. When not specified, all event types will be returned. Values - DRIFT, REMEDIATION, COMPLIANCE | | change | Array of stringsItems Enum: \"ADDED\" \"MODIFIED\" \"REMOVED\" Filter drift or enforcement results for an event by type of change. When not specified, all change types will be returned. Values - ADDED, MODIFIED, REMOVED | | remediated | Array of stringsItems Enum: \"SUCCESS\" \"FAIL\" Filter enforcement results for an event by SUCCESS or FAIL. When not specified, all enforcement results will be returned. | | resourcetype | Array of strings Resource types in the event to filter by. When not specified, all resource types will be returned. Find resource types with GET /metadata/{provider}/resourcetypes or see our service coverage pages for AWS & AWS GovCloud and Azure & Azure Government. | Environment ID. Learn how to find your environment ID. Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. Maximum number of items to return. Earliest created_at time to return events from, Unix time. Learn how to convert to or from Unix time in the API User Guide. 
Returns the OpenAPI 2.0 specification for this API. Example API request here.

Response: the OpenAPI 2.0 specification. Errors: InternalServerError.

Returns a user-friendly interface for the OpenAPI 2.0 specification for this API. Note - users should visit the Swagger UI instead.

Response: the Swagger UI, with the following headers:

| Header | Type |
|:--|:--|
| Access-Control-Allow-Headers | string |
| Access-Control-Allow-Methods | string |
| Access-Control-Allow-Origin | string |
| Content-Type | string |

Returns the permissions required to survey and enforce resources (aws and aws_govcloud only). Example API request here.

| Path parameter | Description |
|:--|:--|
| provider (required) | string. Enum: "aws", "aws_govcloud". Name of the cloud provider. |

Request body - list of resource types to be able to survey and enforce. Find resource types with GET /metadata/{provider}/resource_types or see Service Coverage.

| Field | Description |
|:--|:--|
| survey_resource_types | Array of strings. List of resource types to be able to survey. |
| remediate_resource_types | Array of strings. List of resource types to be able to enforce. |

Permissions for surveying and enforcing the specified resource types.

| Field | Description |
|:--|:--|
| aws | object (PermissionsAws). AWS and AWS GovCloud IAM policy required for surveying and enforcing the desired resource types. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.
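A sketch of a permissions request. The POST method, the /metadata/{provider}/permissions path, host, credentials, and the AWS.S3.Bucket resource type are all assumptions for illustration; check the OpenAPI specification above for the confirmed method and path.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"survey_resource_types": ["AWS.S3.Bucket"], "remediate_resource_types": []}' \
  "https://api.example.com/v0/metadata/aws/permissions"
```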
Lists the resource types supported by Fugue. Example API request here.

| Path parameter | Description |
|:--|:--|
| provider (required) | string. Enum: "aws", "aws_govcloud", "azure", "google". Name of the cloud provider. (For Azure Government, use azure.) |

| Query parameter | Description |
|:--|:--|
| region | string. The AWS region for which to return resource types. Required if provider is aws or aws_govcloud. Values - see API User Guide. |
| beta_resources | boolean. Indicates whether resource types in beta will be returned. |

List of supported resource types.

| Field | Description |
|:--|:--|
| resource_types | Array of strings. List of resource types supported by Fugue. |
| recommended_types | Array of strings. List of Fugue-recommended resource types (aws and aws_govcloud only). |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.

Lists details for all notifications. Example API request here. Learn more about notifications.

| Query parameter | Description |
|:--|:--|
| offset | integer >= 0. Default: 0. Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. |
| max_items | integer [1..100]. Default: 100. Maximum number of items to return. |

List of notification details.

| Field | Description |
|:--|:--|
| count | integer. Count of all found notifications. |
| next_offset | integer. Next offset to use to get the next page of items. |
| is_truncated | boolean. Indicates whether there are more items at the next offset. |
| items | Array of objects (Notification). List of notification configurations. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.
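A minimal listing sketch, paging through notifications. Host, credentials, and the /notifications path are illustrative assumptions.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  "https://api.example.com/v0/notifications?max_items=100&offset=0"
```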
Creates a new notification. Example API request here. If you want the notification to use a manually created SNS topic, update the topic's access policy with the access policy here and replace the variables with your own region, account ID, and topic name. Learn more about notifications.

Configuration options for the new notification:

| Field | Description |
|:--|:--|
| name | string. Human-readable name of the notification. |
| events | Array of strings. List of events the notification is triggered on. Values - compliance, drift, remediation. |
| environments | Array of strings. List of environment IDs the notification is attached to. Learn how to find environment IDs. |
| emails | Array of strings. List of email addresses the notification is delivered to. |
| topic_arn | string. AWS SNS topic ARN the notification is delivered to. Copy the SNS topic access policy here and replace the variables with your own region, account ID, and topic name. |

New notification details.

| Field | Description |
|:--|:--|
| notification_id | string. ID of the notification. |
| name | string. Human-readable name of the notification. |
| events | Array of strings. List of events the notification is triggered on. Values - compliance, drift, remediation. |
| environments | Array of objects. The corresponding environment IDs and names the notification is attached to. |
| emails | Array of strings. List of email addresses the notification is delivered to. |
| topic_arn | string. AWS SNS topic ARN the notification is delivered to. Learn about the SNS topic and its access policy here. |
| last_error | string. Last error recorded while processing the notification. If the last notification processed had no error, this field will be empty. |
| created_by | string. Internal ID of the principal that created the notification. |
| created_at | integer. When the notification was created, Unix time. Learn how to convert to or from Unix time in the API User Guide. |
| updated_by | string. Internal ID of the principal that last updated the notification. |
| updated_at | integer. When the notification was last updated, Unix time. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.
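A creation sketch delivering drift and compliance events by email. Host, credentials, path, and the example name and address are illustrative assumptions.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "prod-drift-alerts",
        "events": ["drift", "compliance"],
        "environments": ["'"$ENVIRONMENT_ID"'"],
        "emails": ["ops@example.com"]
      }' \
  "https://api.example.com/v0/notifications"
```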
Updates an existing notification. Example API request here. If you want the notification to use a manually created SNS topic, update the topic's access policy with the access policy here and replace the variables with your own region, account ID, and topic name. Learn more about notifications.

| Path parameter | Description |
|:--|:--|
| notification_id (required) | string. Notification ID. Find your notification ID via GET /notifications. |

New configuration options for the notification:

| Field | Description |
|:--|:--|
| name | string. Human-readable name of the notification. |
| events | Array of strings. List of events the notification is triggered on. Values - compliance, drift, remediation. |
| environments | Array of strings. List of environment IDs the notification is attached to. Learn how to find environment IDs. |
| emails | Array of strings. List of email addresses the notification is delivered to. |
| topic_arn | string. AWS SNS topic ARN the notification is delivered to. Copy the SNS topic access policy here and replace the variables with your own region, account ID, and topic name. |

Returns the new notification details, in the same shape as the create-notification response above.

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

Deletes a notification. Example API request here. Learn more about notifications.

| Path parameter | Description |
|:--|:--|
| notification_id (required) | string. Notification ID. Find your notification ID via GET /notifications. |

Response: notification deleted. Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.
Create a new custom rule. Example API request here. Learn more about custom rules.

Configuration options for the new custom rule:

| Field | Description |
|:--|:--|
| name | string. Human-readable name of the custom rule. |
| source | string. Enum: "FUGUE", "CUSTOM". The origin of this rule. Values - CUSTOM (the FUGUE value is for internal use only). |
| description | string. Description of the custom rule. |
| provider | string. Enum: "AWS", "AWS_GOVCLOUD", "AZURE", "GOOGLE". Provider of the custom rule. Deprecated; please use "providers". |
| providers | Array of strings. Enum: "AWS", "AWS_GOVCLOUD", "AZURE", "GOOGLE", "REPOSITORY". Providers for the custom rule. (For Azure Government, use AZURE.) Accepts a list of values. See Custom Rule Reference. |
| severity | string. Enum: "Informational", "Low", "Medium", "High", "Critical". Severity level of the custom rule. See Rule Severity Definitions. Default - High. |
| resource_type | string. Resource type to which the custom rule applies. Find resource types with GET /metadata/{provider}/resource_types or see our service coverage pages for AWS & AWS GovCloud and Azure & Azure Government. Advanced rules must use the value MULTIPLE. |
| rule_text | string. The Rego source code for the rule. |

New custom rule details.

| Field | Description |
|:--|:--|
| id | string. ID of the custom rule. |
| name | string. Human-readable name of the custom rule. |
| source | string. Value: "CUSTOM". The origin of this rule. |
| description | string. Description of the custom rule. |
| provider | string. Enum: "AWS", "AWS_GOVCLOUD", "AZURE", "GOOGLE". Provider of the custom rule. Deprecated; please use "providers". |
| providers | Array of strings. Enum: "AWS", "AWS_GOVCLOUD", "AZURE", "GOOGLE", "REPOSITORY". Providers for the custom rule. (For Azure Government, use AZURE.) |
| resource_type | string. Resource type to which the custom rule applies. |
| tf_resource_type | string. Terraform resource type corresponding to resource_type. |
| severity | string. Enum: "Informational", "Low", "Medium", "High", "Critical". Severity level of the custom rule. See Rule Severity Definitions. |
| compliance_controls | Array of strings. Compliance controls to which the custom rule belongs. |
| families | Array of strings. Families to which the custom rule belongs. |
| status | string. Enum: "ENABLED", "DISABLED", "INVALID". The current status of the rule. |
| rule_text | string. The Rego source code for the rule. |
| created_by | string. Principal that created the rule. |
| created_by_display_name | string. Display name of the user that created the rule. |
| created_at | integer. The date and time the rule was created, Unix time. Learn how to convert to or from Unix time in the API User Guide. |
| updated_by | string. Principal that last updated the rule. |
| updated_by_display_name | string. Display name of the user that last updated the rule. |
| updated_at | integer. The date and time the rule was last updated, Unix time. |
| errors | Array of objects (CustomRuleError). Syntax errors in the Rego source code. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.
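A creation sketch. Host, credentials, and the /rules path are illustrative assumptions, and the embedded Rego is a schematic placeholder only; consult the Custom Rule Reference for the actual rule input schema.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "S3 bucket versioning enabled",
        "providers": ["AWS"],
        "severity": "Medium",
        "resource_type": "AWS.S3.Bucket",
        "rule_text": "package rules.s3_versioning\n\n# placeholder rule body; see Custom Rule Reference\nresource_type := \"AWS.S3.Bucket\"\n\nallow { input.versioning.enabled == true }"
      }' \
  "https://api.example.com/v0/rules"
```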
Returns a list of custom rules. Example API request here. Learn more about custom rules.

| Query parameter | Description |
|:--|:--|
| offset | integer >= 0. Default: 0. Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. |
| max_items | integer [1..100]. Default: 100. Maximum number of items to return. |
| status | string. Enum: "ENABLED", "DISABLED", "INVALID". Filter on rule status. |
| query | string [2..1800] characters. A stringified JSON array of search parameters. |

List of custom rules.

| Field | Description |
|:--|:--|
| count | integer. Total number of custom rules. |
| next_offset | integer. Next offset to use to get the next page of items. |
| is_truncated | boolean. Indicates whether there are more items at the next offset. |
| items | Array of objects (CustomRule). List of custom rules. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.
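A listing sketch filtered to enabled rules. Host, credentials, and path are illustrative assumptions.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  "https://api.example.com/v0/rules?status=ENABLED&max_items=100"
```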
Retrieves details on a single custom rule. Example API request here. Learn more about custom rules.

| Path parameter | Description |
|:--|:--|
| rule_id (required) | string. The ID of the rule to get. |

Returns the custom rule details, in the same shape as the create-rule response above (without the errors field).

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.
Updates the configuration of a custom rule. Example API request here. Learn more about custom rules.

| Path parameter | Description |
|:--|:--|
| rule_id (required) | string. The ID of the rule to update. |

New configuration options for the custom rule:

| Field | Description |
|:--|:--|
| name | string. Human-readable name of the custom rule. |
| description | string. Description of the custom rule. |
| providers | Array of strings. Enum: "AWS", "AWS_GOVCLOUD", "AZURE", "GOOGLE", "REPOSITORY". Providers for the custom rule. (For Azure Government, use AZURE.) Accepts a list of values. See Custom Rule Reference. |
| status | string. Enum: "ENABLED", "DISABLED". Status of the custom rule. |
| resource_type (required) | string. Resource type to which the custom rule applies. Find resource types with GET /metadata/{provider}/resource_types or see our service coverage pages for AWS & AWS GovCloud and Azure & Azure Government. Advanced rules must use the value MULTIPLE. |
| rule_text (required) | string. Rego code used by the rule. |
| severity | string. Enum: "Informational", "Low", "Medium", "High", "Critical". Severity level of the custom rule. See Rule Severity Definitions. |
| families | Array of strings. Families to which the custom rule belongs. |

Returns the updated custom rule details, in the same shape as the create-rule response above, including any syntax errors in the Rego source code.

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.
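An update sketch that disables a rule. The PATCH method is an assumption here, as are host, credentials, and path; note that resource_type and rule_text are required in the update body.

```
curl -s -X PATCH -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"status": "DISABLED", "resource_type": "AWS.S3.Bucket", "rule_text": "..."}' \
  "https://api.example.com/v0/rules/$RULE_ID"
```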
Deletes a specified custom rule. Example API request here. Learn more about custom rules.

| Path parameter | Description |
|:--|:--|
| rule_id (required) | string. The ID of the rule to delete. |

Response: custom rule deleted. Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.

Tests a custom rule using state from a scan. Example API request here. Learn more about custom rules.

| Query parameter | Description |
|:--|:--|
| via_download | boolean. Force output to be downloadable. The API returns a presigned link to download a JSON document containing test results. Useful when testing a very large number of resources. |
Information about the custom rule to be tested:

| Field | Description |
|:--|:--|
| resource_type | string. Resource type to which the custom rule applies. Find resource types with GET /metadata/{provider}/resource_types or see our service coverage pages for AWS & AWS GovCloud and Azure & Azure Government. |
| rule_text (required) | string. The Rego source code for the rule. |
| scan_id (required) | string. Scan to test the custom rule with. Learn how to find your scan ID. |

Validation results for the custom rule.

| Field | Description |
|:--|:--|
| errors | Array of objects (CustomRuleError). |
| result | string. Enum: "PASS", "FAIL", "UNKNOWN". Result of testing the custom rule. |
| resources | Array of objects (TestCustomRuleOutputResource). |
| links | object. A presigned link to a downloadable JSON document containing test results. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.

Gets the input against which a custom rule would be tested. Example API request here. Learn more about custom rules.

| Query parameter | Description |
|:--|:--|
| via_download | boolean. Force output to be downloadable. The API returns a presigned link to download a JSON document containing test input. Useful when testing a very large number of resources. |
| scan_id (required) | string. Scan ID for the custom rule test input. |

Input used for the custom rule.

| Field | Description |
|:--|:--|
| resources | object. |
| links | object. A presigned link to a downloadable JSON document containing test input. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.

Create a new custom compliance family.

Configuration options for the new custom compliance family:

| Field | Description |
|:--|:--|
| name | string. Human-readable name of the family. |
| description | string. Description of the family. |
| recommended | boolean. Whether the family is recommended for all new environments. |
| always_enabled | boolean. Whether the family will automatically be enabled on all environments within the tenant. |
| rule_ids | Array of strings. List of rule IDs to associate with the family. |

The newly-created custom compliance family.
| Field | Description |
|:--|:--|
| id | string. ID of the family. |
| tenant_id | string. ID of the tenant. |
| name | string. Human-readable name of the family. |
| source | string. Enum: "FUGUE", "CUSTOM". The origin of this family. |
| description | string. Description of the family. |
| providers | Array of strings. Providers the family applies to. |
| recommended | boolean. Whether the family is recommended for all new environments. |
| always_enabled | boolean. Whether the family will automatically be enabled on all environments within the tenant. |
| rule_ids | Array of strings. List of rule IDs associated with the family. |
| created_by | string. Principal that created the family. |
| created_by_display_name | string. Display name of the user that created the family. |
| created_at | integer. The date and time the family was created. |
| updated_by | string. Principal that last updated the family. |
| updated_by_display_name | string. Display name of the user that last updated the family. |
| updated_at | integer. The date and time the family was last updated. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.

Return a list of compliance families.

| Query parameter | Description |
|:--|:--|
| offset | integer >= 0. Default: 0. Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. |
| max_items | integer [1..100]. Default: 10. Maximum number of items to return. |
| order_by | string. Default: "name". Enum: "name", "policy". Attribute to order families by. |
| order_direction | string. Default: "asc". Enum: "asc", "desc". Order families in an ascending or descending fashion. |
| query | string [2..1800] characters. A stringified JSON array of search parameters. |

List of compliance families.

| Field | Description |
|:--|:--|
| items | Array of objects (Family). |
| count | integer. |
| is_truncated | boolean. |
| next_offset | integer. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.
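A listing sketch for compliance families, sorted by name. Host, credentials, and the /families path are illustrative assumptions.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  "https://api.example.com/v0/families?order_by=name&order_direction=asc&max_items=10"
```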
Return a specific family.

| Path parameter | Description |
|:--|:--|
| family_id (required) | string. The ID of the family to look up. |

Returns the desired family, in the same shape as the create-family response above.

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

Delete a family.

| Path parameter | Description |
|:--|:--|
| family_id (required) | string. The ID of the family to delete. |

Response: family deleted. Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.

Edit an existing user family.

| Path parameter | Description |
|:--|:--|
| family_id (required) | string. The ID of the family to update. |

New configuration options for the family:

| Field | Description |
|:--|:--|
| name | string. Human-readable name of the family. |
| description | string. Description of the family. |
| recommended | boolean. Whether the family is recommended for all new environments. |
| always_enabled | boolean. Whether the family will automatically be enabled on all environments within the tenant. |
| rule_ids | Array of strings. List of rule IDs to associate with the family. |

Returns the updated family, in the same shape as the create-family response above.

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.
List details for all invites.

| Query parameter | Description |
|:--|:--|
| offset | integer >= 0. Default: 0. Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. |
| max_items | integer [1..100]. Default: 100. Maximum number of items to return. |
| order_direction | string. Default: "desc". Enum: "asc", "desc". Direction to sort the items in. |
| email | string. Used to filter the list to a single invite by email. |

Invite details.

| Field | Description |
|:--|:--|
| items | Array of objects (Invite). Paginated list of invites. |
| is_truncated | boolean. Indicates whether there are more items at the next offset. |
| next_offset | integer. Next offset to use to get the next page of items. |
| count | integer. Total number of items. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

Creates a new invite.

Configuration options for the new invite:

| Field | Description |
|:--|:--|
| email (required) | string. |
| group_ids (required) | Array of strings. |
| expires | boolean. Default: true. |

New invite details.

| Field | Description |
|:--|:--|
| id (required) | string. |
| resource_type | string. |
| email (required) | string. |
| status (required) | string. |
| created_at (required) | integer. |
| updated_at | integer. |
| expires_at (required) | integer. |
| groups | object. Map from group ID to name. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

Fetch an invite by ID.

| Path parameter | Description |
|:--|:--|
| invite_id (required) | string. ID of the invite. |

Returns the invite details, in the same shape as the create-invite response above.

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.
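An invite-creation sketch. Host, credentials, the /invites path, and the example address are illustrative assumptions.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"email": "new.user@example.com", "group_ids": ["'"$GROUP_ID"'"], "expires": true}' \
  "https://api.example.com/v0/invites"
```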
Return a list of groups.

| Query parameter | Description |
|:--|:--|
| offset | integer >= 0. Default: 0. Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. |
| max_items | integer [1..100]. Default: 10. Maximum number of items to return. |
| order_by | string. Default: "name". Enum: "name", "policy". Attribute to order groups by. |
| order_direction | string. Default: "asc". Enum: "asc", "desc". Order groups in an ascending or descending fashion. |

List of groups.

| Field | Description |
|:--|:--|
| items | Array of objects (Group). |
| count | integer. |
| is_truncated | boolean. |
| next_offset | integer. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.

Creates a new group.

Configuration options for the new group:

| Field | Description |
|:--|:--|
| name | string. |
| policy | string. Enum: "fugue:READONLY", "fugue:AUDITOR", "fugue:EDITOR", "fugue:CONTRIBUTOR", "fugue:MANAGER", "fugue:ORGANIZATION_REPORT_VIEWER", "fugue:IAC_SCANNER". |
| environment_ids | Array of strings. List of environment IDs to grant the group access to. Use * to grant access to all current and future environments. |

New group details.

| Field | Description |
|:--|:--|
| id | string. |
| name | string. |
| environments | object. Map from environment ID to name. |
| policy | string. |
| users | object. Map from user_id to name. |
| api_clients | object. Map from api_client_id to name. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

Edit a list of users' group assignments.

User and group IDs to be updated:

| Field | Description |
|:--|:--|
| user_ids | Array of strings. |
| group_ids | Array of strings. |

Response: user group assignment updated. Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

List details for all users.

| Query parameter | Description |
|:--|:--|
| offset | integer >= 0. Default: 0. Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. |
| max_items | integer [1..100]. Default: 100. Maximum number of items to return. |
| order_direction | string. Default: "desc". Enum: "asc", "desc". Direction to sort the items in. |
| email | string. Used to filter the list to a single user by email. |

User details.

| Field | Description |
|:--|:--|
| items | Array of objects (User). Paginated list of users. |
| is_truncated | boolean. Indicates whether there are more items at the next offset. |
| next_offset | integer. Next offset to use to get the next page of items. |
| count | integer. Total number of items. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

Fetch a user by ID.

| Path parameter | Description |
|:--|:--|
| user_id (required) | string. ID of the user. |

User details.

| Field | Description |
|:--|:--|
| id (required) | string. |
| resource_type | string. |
| email (required) | string. |
| owner | boolean. |
| first_name | string. |
| last_name | string. |
| status (required) | string. |
| groups | object. Map from group ID to name. |
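A group-creation sketch granting read-only audit access to all environments. Host, credentials, and the /groups path are illustrative assumptions.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"name": "auditors", "policy": "fugue:AUDITOR", "environment_ids": ["*"]}' \
  "https://api.example.com/v0/groups"
```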
Errors: BadRequestError, AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

Lists details for all rule waivers. Example API request here. Learn more about waivers here.

| Query parameter | Description |
|:--|:--|
| offset | integer >= 0. Default: 0. Number of items to skip before returning. This parameter is used when the number of items spans multiple pages. |
| max_items | integer [1..100]. Default: 100. Maximum number of items to return. |
| order_by | string. Default: "name". Value: "name". Field to sort the items by. |
| order_direction | string. Default: "asc". Enum: "asc", "desc". Direction to sort the items in. |
| query | string [2..1800] characters. DEPRECATED. A stringified JSON array of search parameters. Use the q.<parameter> fields instead. |
| q.id | Array of strings. A specific rule waiver ID. |
| q.environment_id | Array of strings. An environment ID associated with a rule waiver. |
| q.name | Array of strings. A name of a rule waiver. |
| q.rule_id | Array of strings. A rule ID associated with a rule waiver. |
| q.resource_id | Array of strings. A resource ID associated with a rule waiver. |
| q.resource_type | Array of strings. A resource type associated with a rule waiver. |
| q.resource_provider | Array of strings. A resource provider associated with a rule waiver. |
| q.environment_name | Array of strings. An environment name associated with a rule waiver. |
| q.environment_provider | Array of strings. An environment provider associated with a rule waiver. |
| q.provider | Array of strings. Alias for q.environment_provider. |
| q.status | Array of strings. A current waiver status. |

List of rule waivers.

| Field | Description |
|:--|:--|
| items | Array of objects (RuleWaiver). |
| count | integer. |
| is_truncated | boolean. |
| next_offset | integer. |

Errors: AuthenticationError, AuthorizationError, InternalServerError.
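A listing sketch for active waivers in one environment, using the q.<parameter> filters. Host, credentials, and the /rule_waivers path are illustrative assumptions.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  "https://api.example.com/v0/rule_waivers?q.status=ACTIVE&q.environment_id=$ENVIRONMENT_ID"
```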
Creates a new rule waiver. Example API request here. Learn more about waivers here.

Rule waiver parameters:

| Field | Description |
|:--|:--|
| environment_id (required) | string. |
| resource_provider (required) | string. |
| resource_type (required) | string. |
| resource_id (required) | string. |
| resource_tag | string. |
| rule_id (required) | string. |
| name (required) | string. |
| comment | string. |
| expires_at | integer. Unix timestamp representation of the expiration date of this rule waiver. Cannot be combined with expires_at_duration or expires_at_ts. Learn how to convert to or from Unix time in the API User Guide. |
| expires_at_duration | object (Duration). Defines a duration from the current time and date. |
| expires_at_ts | string. RFC 3339 representation of the expiration date of this rule waiver. Cannot be combined with expires_at or expires_at_duration. Example: 2022-12-19 16:39:57-08:00. |

New rule waiver details.

| Field | Description |
|:--|:--|
| id (required) | string. |
| environment_id (required) | string. |
| environment_name | string. |
| resource_provider (required) | string. |
| resource_type (required) | string. |
| resource_id (required) | string. Resource ID that the waiver applies to. Can contain * or ? wildcard characters. |
| resource_tag | string. Tag key and tag value, separated by ':' (e.g., 'env:prod', 'env:*', '*'). Absence of a tag value means all values apply. Supports `*`, `?`, and `:*` wildcards (globbing patterns). To fully match a string and ignore the wildcards, use backticks '`'. For example, a tag of `{ 'key1': 'value1:value?' }` can be matched with: `*`, `*:*`, `key1:*`, `key1:value1\:*`, `key1:value1\:value\?`, or ``key1:`value1:value?` ``. |
| rule_id (required) | string. |
| name (required) | string. |
| comment | string. |
| created_by | string. Principal that created the rule waiver. |
| created_by_display_name | string. Display name of the user that created the rule waiver. |
| created_at | integer. The date and time when the rule waiver was created. |
| expires_at | integer. The date and time when this rule waiver expires. |
| status | string. Enum: "ACTIVE", "EXPIRED". Whether this waiver is active or expired. |
| updated_by | string. Principal that last updated the rule waiver. |
| updated_by_display_name | string. Display name of the user that last updated the rule waiver. |
| updated_at | integer. The date and time when the rule waiver was last updated. |
| rule_description | string. Description of the rule. |
| rule_compliance_mapping | object. Mapping of this rule in compliance families and their controls which are enabled in a given environment. |

Errors: BadRequestError, AuthenticationError, AuthorizationError, InternalServerError.
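A waiver-creation sketch using a wildcarded resource ID and an RFC 3339 expiration. Host, credentials, the /rule_waivers path, and the example values are illustrative assumptions.

```
curl -s -u "$API_CLIENT_ID:$API_CLIENT_SECRET" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "waive-legacy-buckets",
        "environment_id": "'"$ENVIRONMENT_ID"'",
        "resource_provider": "aws",
        "resource_type": "AWS.S3.Bucket",
        "resource_id": "legacy-*",
        "rule_id": "'"$RULE_ID"'",
        "comment": "Legacy buckets scheduled for decommission",
        "expires_at_ts": "2025-12-19 16:39:57-08:00"
      }' \
  "https://api.example.com/v0/rule_waivers"
```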
Retrieves details and rule metadata for a rule waiver. Example API request here. Learn more about waivers here.

| Path parameter | Description |
|:--|:--|
| rule_waiver_id (required) | string. The ID of the rule waiver to retrieve. |

Returns the rule waiver details, in the same shape as the create-rule-waiver response above.

Errors: AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

Updates a rule waiver. Example API request here. Learn more about waivers here.

| Path parameter | Description |
|:--|:--|
| rule_waiver_id (required) | string. The ID of the rule waiver to update. |

Rule waiver update parameters:

| Field | Description |
|:--|:--|
| name | string. |
| comment | string. |
| expires_at | integer. Unix timestamp representation of the expiration date of this rule waiver. Cannot be combined with expires_at_duration or expires_at_ts. Learn how to convert to or from Unix time in the API User Guide. |
| expires_at_duration | object (Duration). Defines a duration from the current time and date. |
| expires_at_ts | string. RFC 3339 representation of the expiration date of this rule waiver. Cannot be combined with expires_at or expires_at_duration. Example: 2022-12-19 16:39:57-08:00. |
A successful update returns the rule waiver object described above ("Rule waiver details").

Error responses: AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.

Deletes a rule waiver. Example API request here. Learn more about waivers here.

| Parameter | Description |
|:--|:--|
| rule_waiver_id (required) | string. The ID of the rule waiver to delete. |

Rule waiver deleted.

Error responses: AuthenticationError, AuthorizationError, NotFoundError, InternalServerError.
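A delete call might look like the following sketch (host and path assumed, as above):

```
curl -s -X DELETE \
  -H "Authorization: Token ${TOKEN}" \
  "https://api.example.com/v1/rule_waivers/${RULE_WAIVER_ID}"
```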
Lists audit log events.

| Parameter | Description |
|:--|:--|
| max_items | integer [25..999], default 100. Maximum number of items to return. |
| order_direction | string, default `desc`. Enum: `asc`, `desc`. Direction to sort the items in. |
| subject_kind | string. The subject to filter on. Values: `api_client`, `environment`, `group`, `invite`, `notification`, `rule`, `rule_waiver`, `scan`. |
| range_from | string <date-time>. Earliest timestamp to include in results (ISO 8601). Example: `2021-03-16T02:00` |
| range_to | string <date-time>. Latest timestamp to include in results (ISO 8601). Example: `2021-03-18T23:00` |
| next_token | string. If set, can be used to retrieve the next page of records. |

A successful request returns a list of audit log events:

| Field | Description |
|:--|:--|
| items | Array of objects (AuditLogEvent). List of audit log events. |
| next_token | string. If set, can be used to retrieve the next page of records. |

Error responses: Bad request error, Authentication error, Authorization error, Internal server error.
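Listing audit events for rule waivers over a time window could then look like this sketch (host and path assumed; parameters as documented above):

```
curl -s -G \
  -H "Authorization: Token ${TOKEN}" \
  --data-urlencode "subject_kind=rule_waiver" \
  --data-urlencode "range_from=2021-03-16T02:00" \
  --data-urlencode "range_to=2021-03-18T23:00" \
  "https://api.example.com/v1/audit_log_events"
```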
{ "category": "Provisioning", "file_name": ".md", "project_name": "Keylime", "subcategory": "Security & Compliance" }
[ { "data": "Contents: Keylimes runtime integrity monitoring requires the set up of Linux IMA. More information about IMA in general can be found in the openSUSE Wiki. You should refer to your Linux Distributions documentation to enable IMA, but as a general guide most recent versions already have CONFIG_IMA toggled to Y as a value during Kernel compile. It is then just a case of deploying an ima-policy file. On a Fedora or Debian system, the file is located in /etc/ima/ima-policy. For configuration of your IMA policy, please refer to the IMA Documentation. Within Keylime we use the following for demonstration (found in demo/ima-policies/ima-policy-keylime): ``` dont_measure fsmagic=0x9fa0 dont_measure fsmagic=0x62656572 dont_measure fsmagic=0x64626720 dont_measure fsmagic=0x01021994 dont_measure fsmagic=0x858458f6 dont_measure fsmagic=0x73636673 dont_measure fsmagic=0xf97cff8c dont_measure fsmagic=0x27e0eb dont_measure fsmagic=0x794c7630 dontmeasure objtype=varlogt dontmeasure objtype=auditdlogt dontmeasure objtype=tmp_t measure func=BPRM_CHECK measure func=FILEMMAP mask=MAYEXEC measure func=MODULE_CHECK uid=0 ``` This default policy measures all executables in bprm_check and all files mmapped executable in file_mmap and module checks and skips several irrelevant files (logs, audit, tmp, etc). Once your ima-policy is in place, reboot your machine (or even better have it present in your image for first boot). You can then verify IMA is measuring your system: ``` PCR template-hash filedata-hash filename-hint 10 3c93cea361cd6892bc8b9e3458e22ce60ef2e632 ima-ng sha1:ac7dd11bf0e3bec9a7eb2c01e495072962fb9dfa boot_aggregate 10 3d1452eb1fcbe51ad137f3fc21d3cf4a7c2e625b ima-ng sha1:a212d835ca43d7deedd4ee806898e77eab53dafa /usr/lib/systemd/systemd 10 e213099a2bf6d88333446c5da617e327696f9eb4 ima-ng sha1:6da34b1b7d2ca0d5ca19e68119c262556a15171d /usr/lib64/ld-2.28.so 10 7efd8e2a3da367f2de74b26b84f20b37c692b9f9 ima-ng sha1:af78ea0b455f654e9237e2086971f367b6bebc5f /usr/lib/systemd/libsystemd-shared-239.so 10 784fbf69b54c99d4ae82c0be5fca365a8272414e ima-ng sha1:b0c601bf82d32ff9afa34bccbb7e8f052c48d64e /etc/ld.so.cache ``` A runtime policy in its most basic form is a set of golden cryptographic hashes of files un-tampered state or of keys that may be loaded onto keyrings for IMA verification. Keylime will load the runtime policy into the Keylime Verifier. Keylime will then poll tpm quotes to PCR 10 on the agents TPM and validate the agents file(s) state against the policy. If the object has been tampered with or an unexpected key was loaded onto a keyring, the hashes will not match and Keylime will place the agent into a failed state. Likewise, if any files invoke the actions stated in ima-policy that are not matched in the allowlist, keylime will place the agent into a failed state. Allowlists are contained in Keylime runtime policies - see below for more details. Runtime policies heavily depend on the IMA configuration and used files by the operating system. Keylime provides two helper scripts for getting started. Note Those scripts only provide a reference point to get started and not a complete solution. We encourage developers / users of Keylime to be creative and come up with their own process for securely creating and maintaining runtime policies. The first script generates a runtime policy from the initramfs, IMA log (just for the boot aggregate) and files located on the root filesystem of a running system. 
The create_runtime_policy.sh script is available here. Run the script as follows:

```
./create_runtime_policy.sh -o runtime_policy_keylime.json
```

For more options see the help page (create_runtime_policy.sh -h):

```
Usage: $0 -o/--output_file FILENAME [-a/--algo ALGO] [-x/--ramdisk-location PATH]
          [-y/--boot_aggregate-location PATH] [-z/--rootfs-location PATH]
          [-e/--exclude_list FILENAME] [-s/--skip-path PATH]

optional arguments:
  -a/--algo                    (checksum algorithm to be used, default: sha1sum)
  -x/--ramdisk-location        (path to initramdisk, default: /boot/, set to "none" to skip)
  -y/--boot_aggregate-location (path for IMA log, used for boot aggregate extraction,
                                default: /sys/kernel/security/ima/ascii_runtime_measurements,
                                set to "none" to skip)
  -z/--rootfs-location         (path to root filesystem, default: /, cannot be skipped)
  -e/--exclude_list            (filename containing a list of paths to be excluded, i.e.,
                                the verifier will not try to match checksums, default: none)
  -s/--skip-path               (comma-separated path list, files found there will not have
                                checksums calculated, default: none)
  -h/--help                    show this message and exit
```

Note: you need OpenSSL installed to have the sha*sum CLI executables available.

The resulting runtime_policy_keylime.json file can be used directly by keylime_tenant (option --runtime-policy).

Warning: it's best practice to create the runtime policy in a secure environment. Ideally, this should be on a fully encrypted, air-gapped computer that is permanently isolated from the Internet. Disable all network cards and sign the runtime policy hash to ensure no tampering occurs when transferring it to other machines.

The second script allows the user to build more complex policies by providing options to include keyring verification and IMA verification keys, to generate an allowlist from an IMA measurement log, and to extend existing policies. A basic policy can easily be created from a system's IMA measurement log:

```
keylime_create_policy -m /path/to/ascii_runtime_measurements -o runtime_policy.json
```

For more options see the help page keylime_create_policy -h:

```
usage: keylime_create_policy [-h] [-B BASE_POLICY] [-k] [-b] [-a ALLOWLIST]
                             [-m IMA_MEASUREMENT_LIST] [-i IGNORED_KEYRINGS]
                             [-o OUTPUT] [--no-hashes] [-A IMA_SIGNATURE_KEYS]

This is an experimental tool for adding items to a Keylime's IMA runtime policy

options:
  -h, --help            show this help message and exit
  -B BASE_POLICY, --base-policy BASE_POLICY
                        Merge new data into the given JSON runtime policy
  -k, --keyrings        Create keyrings policy entries
  -b, --ima-buf         Process ima-buf entries other than those related to keyrings
  -a ALLOWLIST, --allowlist ALLOWLIST
                        Use given plain-text allowlist
  -m IMA_MEASUREMENT_LIST, --ima-measurement-list IMA_MEASUREMENT_LIST
                        Use given IMA measurement list for keyrings and critical data
                        extraction rather than /sys/kernel/security/ima/ascii_runtime_measurements
  -i IGNORED_KEYRINGS, --ignored-keyrings IGNORED_KEYRINGS
                        Ignore the given keyring; this option may be passed multiple times
  -o OUTPUT, --output OUTPUT
                        File to write JSON policy into; default is to print to stdout
  --no-hashes           Do not add any hashes to the policy
  -A IMA_SIGNATURE_KEYS, --add-ima-signature-verification-key IMA_SIGNATURE_KEYS
                        Add the given IMA signature verification key to the
                        Keylime-internal 'tenant_keyring'; the key should be an x509
                        certificate in DER or PEM format but may also be a public or
                        private key file; this option may be passed multiple times
```
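Options can be combined. As a small sketch, the following merges keyring entries from the live measurement log into an existing policy; the file names are placeholders:

```
# Merge keyring entries (-k) extracted from the running system's IMA log
# into an existing base policy; output file name is arbitrary.
keylime_create_policy -B runtime_policy.json -k \
  -m /sys/kernel/security/ima/ascii_runtime_measurements \
  -o runtime_policy_merged.json
```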
IMA can measure which keys are loaded onto different keyrings. Keylime has the option to verify those keys and automatically use them for signature verification. The hash of a key can be generated, for example, with:

```
sha256sum /etc/keys/ima/rsakey-rsa.crt.der
```

As seen in the JSON schema below, the hash (sha1 or sha256, depending on the IMA configuration) can be added as follows, where .ima is the keyring the key gets loaded onto and <SHA256_HASH> is the hash of that key:

```
jq '.keyrings += {".ima" : ["<SHA256_HASH>"]}' runtime_policy.json > runtime_policy_with_keyring.json
```

The following rule should be added to the IMA policy so that IMA reports keys loaded onto the keyrings .ima and .evm (since Linux 5.6):

```
measure func=KEY_CHECK keyrings=.ima|.evm
```

If a key should only be verified and not be used for IMA signature verification, it can be added to the ignore list:

```
jq '.ima.ignored_keyrings += [".ima"]' runtime_policy.json > runtime_policy_ignore_ima.json
```

If * is added, no verified keyring is used for IMA signature verification.

The tenant parses the allowlist and exclude list into a JSON object that is then sent to the verifier. Depending on the use case, the object can also be constructed manually instead of using the tenant.

```
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Keylime IMA policy",
  "type": "object",
  "properties": {
    "meta": {
      "type": "object",
      "properties": {
        "version": { "type": "integer", "description": "Version number of the IMA policy schema" }
      },
      "required": ["version"],
      "additionalProperties": false
    },
    "release": {
      "type": "number",
      "title": "Release version",
      "description": "Version of the IMA policy (arbitrarily chosen by the user)"
    },
    "digests": {
      "type": "object",
      "title": "File paths and their digests",
      "patternProperties": {
        ".*": {
          "type": "array",
          "title": "Path of a valid file",
          "items": { "type": "string", "title": "Hash of a valid file" }
        }
      }
    },
    "excludes": {
      "type": "array",
      "title": "Excluded file paths",
      "items": { "type": "string", "format": "regex" }
    },
    "keyrings": {
      "type": "object",
      "patternProperties": {
        ".*": { "type": "string", "title": "Hash of the content in the keyring" }
      }
    },
    "ima-buf": {
      "type": "object",
      "title": "Validation of ima-buf entries",
      "patternProperties": {
        ".*": { "type": "string", "title": "Hash of the ima-buf entry" }
      }
    },
    "verification-keys": {
      "type": "array",
      "title": "Public keys to verify IMA attached signatures",
      "items": { "type": "string" }
    },
    "ima": {
      "type": "object",
      "title": "IMA validation configuration",
      "properties": {
        "ignored_keyrings": {
          "type": "array",
          "title": "Ignored keyrings for key learning",
          "description": "The IMA validation can learn the used keyrings embedded in the kernel. Use '*' to never learn any key from the IMA keyring measurements",
          "items": { "type": "string", "title": "Keyring name" }
        },
        "log_hash_alg": {
          "type": "string",
          "title": "IMA entry running hash algorithm",
          "description": "The hash algorithm used for the running hash in IMA entries (second value). The kernel currently hardcodes it to sha1.",
          "const": "sha1"
        }
      },
      "required": ["ignored_keyrings", "log_hash_alg"],
      "additionalProperties": false
    }
  },
  "required": ["meta", "release", "digests", "excludes", "keyrings", "ima", "ima-buf", "verification-keys"],
  "additionalProperties": false
}
```
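As an illustration of the schema, the following sketch writes out a minimal hand-constructed policy. The digest, paths, and version number are placeholders chosen for the example, not values from a real system:

```
# Minimal policy containing every required top-level key from the schema above.
# <SHA1_OR_SHA256_HASH> is a placeholder for a real file digest.
cat <<'EOF' > minimal_runtime_policy.json
{
  "meta": { "version": 1 },
  "release": 0,
  "digests": { "/usr/bin/bash": ["<SHA1_OR_SHA256_HASH>"] },
  "excludes": ["/var/log/.*"],
  "keyrings": {},
  "ima-buf": {},
  "verification-keys": [],
  "ima": { "ignored_keyrings": [], "log_hash_alg": "sha1" }
}
EOF
```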
Now that we have our runtime policy available, we can send it to the verifier.

Note: if you're using a TPM emulator (for example with ansible-keylime-tpm-emulator), you will also need to run the Keylime IMA emulator. To do this, open a terminal and run keylime_ima_emulator.

Using keylime_tenant we can send the runtime policy as follows:

```
touch payload   # create empty payload for example purposes
keylime_tenant -c add --uuid <agent-uuid> -f payload --runtime-policy /path/to/policy.json
```

Note: if your agent is already registered, you can use -c update.

Create a script that does anything (for example echo "hello world") that is not present in your runtime policy. Run the script as root on the agent machine. You will then see the following output on the verifier, showing the agent status change to failed:

```
keylime.tpm - INFO - Checking IMA measurement list...
keylime.ima - WARNING - File not found in allowlist: /root/evil_script.sh
keylime.ima - ERROR - IMA ERRORS: template-hash 0 fnf 1 hash 0 good 781
keylime.cloudverifier - WARNING - agent D432FBB3-D2F1-4A97-9EF7-75BD81C00000 failed, stopping polling
```

Keylime supports the verification of IMA file signatures, which also helps to detect modifications to immutable files and can be used to complement or even replace the allowlist of hashes in the runtime policy if all relevant executables and libraries are signed. However, setting up a system that has all files signed is beyond the scope of this documentation. In the following we will show how files can be signed and how a system with signed files must be registered.

We assume that the system has already been set up for runtime integrity monitoring following the above steps and that the system would not show any errors on the Keylime Verifier side. It should not be registered with the Keylime verifier at this point. If it is, we now deregister it:

```
keylime_tenant -c delete -u <agent-uuid>
```

Our first step is to enable IMA Appraisal in Linux. Recent Fedora kernels, for example, have IMA Appraisal support built in but not activated. To enable it, we need to add the following Linux kernel parameters to the Linux boot command line:

```
ima_appraise=fix ima_template=ima-sig ima_policy=tcb
```

For this we edit /etc/default/grub, append the above parameters to the GRUB_CMDLINE_LINUX line, and then recreate the system's grub configuration file with the following command:

```
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```

IMA will be in IMA Appraisal fix-mode when the system is started up the next time. Fix-mode, unlike enforcement mode, does not require that all files be signed, but gives us the benefit that the verifier receives all file signatures of signed executables. For IMA Appraisal to append the file signatures to the IMA log, we need to append the following line to the above IMA policy:

```
appraise func=BPRM_CHECK fowner=0 appraise_type=imasig
```

We now create our IMA file signing key using the following commands:

```
openssl genrsa -out ima-filesigning.pem 2048
openssl rsa -in ima-filesigning.pem -pubout -out ima-pub.pem
```

Next, we determine the hash (sha1 or sha256) that IMA is using for file measurements by looking at the IMA measurement log, and then use evmctl to sign a demo executable that we derive from the echo tool:

```
sudo dnf -y install ima-evm-utils
cp /bin/echo ./myecho
sudo evmctl ima_sign --key ima-filesigning.pem -a <hash> myecho
```

Note: it is important that we use the same hash for signing the file that IMA also uses for file measurements.
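One quick way to check which algorithm your measurement log is using is to look at the file-data hash column, for example:

```
# In the ima-ng template, the 4th column of each entry is prefixed with the
# file-data hash algorithm, e.g. "sha1:..." or "sha256:...".
sudo awk '{ print $4 }' /sys/kernel/security/ima/ascii_runtime_measurements | head -n 3
```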
In this case we use sha1, since the IMA measurement log further above shows sha1 filedata-hashes in the 4th column. On more recent systems we would likely use sha256.

Note: if the IMA measurement log contains invalid signatures, the system will have to be rebooted to start over with a clean log that the Keylime Verifier can successfully verify. Invalid signatures may, for example, be in the log if executables were accidentally signed with the wrong hash, such as sha1 instead of sha256. In this case they all need to be re-signed to match the hash that IMA is using for file signatures. Another reason for an invalid signature may be that a file was modified after it was signed. Any file modification will invalidate the signature. Similarly, a malformed or altered security.ima extended attribute will lead to a signature verification failure. Yet another reason may be that an unknown key was used for signing files. In this case the system should be re-registered with that additional key using the Keylime tenant tool.

To verify that the file has been properly signed, we can use the following command, which will show the security.ima extended attribute's value:

```
getfattr -m ^security.ima --dump myecho
```

We now reboot the machine:

```
reboot
```

After the reboot the IMA measurement log should not have any measurement of the myecho tool. The following command should not return anything:

```
grep myecho /sys/kernel/security/ima/ascii_runtime_measurements
```

We now create a new policy that includes the signing key using the keylime_create_policy tool:

```
keylime_create_policy -B /path/to/runtime_policy.json -A /path/to/ima-pub.pem -o /output/path/runtime_policy_with_key.json
```

After that we register the agent with the new policy:

```
keylime_tenant -c add --uuid <agent-uuid> -f payload --runtime-policy /path/to/runtime_policy_with_key.json
```

We can now execute the myecho tool as root:

```
sudo ./myecho
```

At this point we should not see any errors on the verifier side, and there should be one entry for myecho in the IMA measurement log that contains a column after the file path with the file signature:

```
grep myecho /sys/kernel/security/ima/ascii_runtime_measurements
```

To test that signature verification works, we can now invalidate the signature by appending a byte to the file and executing it again:

```
echo >> ./myecho
sudo ./myecho
```

We should now see two entries in the IMA measurement log. Each one should have a different measurement:

```
grep myecho /sys/kernel/security/ima/ascii_runtime_measurements
```

The verifier log should now indicate a bad file signature:

```
keylime.tpm - INFO - Checking IMA measurement list on agent: D432FBB3-D2F1-4A97-9EF7-75BD81C00000
keylime.ima - WARNING - signature for file /home/test/myecho is not valid
keylime.ima - ERROR - IMA ERRORS: template-hash 0 fnf 0 hash 0 bad-sig 1 good 3042
keylime.cloudverifier - WARNING - agent D432FBB3-D2F1-4A97-9EF7-75BD81C00000 failed, stopping polling
```

Key learning (work in progress on CentOS and Fedora): using key learning to verify files requires that files logged by IMA are appropriately signed. If files are not signed or have a bad signature, then they must either be in the exclude list of the runtime policy or their hashes must be part of the runtime policy.
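For instance, an unsigned helper script could be exempted by appending a pattern to the policy's excludes list; the path here is purely illustrative:

```
# Add an unsigned file to "excludes" so the verifier skips checksum/signature matching for it.
jq '.excludes += ["/usr/local/bin/unsigned-helper.sh"]' runtime_policy.json > runtime_policy_excl.json
```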
It should also be noted that IMA signature verification provides lock-down of a system and ensures the provenance of files from a trusted source, but, unlike file hashes, it does not protect against file renaming or against replacing files and their signatures with other signed versions (downgrading).

For the following setup we use RHEL 9.3, since this distribution carries file signatures in its rpm packages and the Dracut scripts have been added to load the IMA signature verification keys onto the .ima keyring. All steps below are run as root.

To ensure that file signatures are installed when packages are installed, run the following command:

```
dnf -y install rpm-plugin-ima
```

Since some packages did not carry file signatures until recently, update all packages to ensure that the signatures are installed:

```
dnf -y update
```

In case the system was previously installed without file signatures, run the following command to reinstall all packages with file signatures:

```
dnf -y reinstall \*
```

To verify whether a particular file has its file signature installed, use the following command to display the contents of security.ima. If nothing is displayed, then this file is missing its file signature:

```
getfattr -m ^security.ima -e hex --dump /usr/bin/bash
```

We must set up the system with the kernel command line option ima_template=ima-sig so that IMA signatures become part of the measurement log. It is not necessary to enable signature enforcement on the system; measuring executed applications is sufficient for the purpose of key learning. For this we edit /etc/default/grub and adjust the following line:

```
GRUB_CMDLINE_LINUX="rhgb quiet ima_template=ima-sig"
```

Then run the following command to update the kernel command line options:

```
grub2-mkconfig -o /boot/grub2/grub.cfg   # location on CentOS/RHEL
```

Set the following IMA policy in /etc/ima/ima-policy, from where systemd will load the policy:

```
dont_measure fsmagic=0x9fa0
dont_measure fsmagic=0x62656572
dont_measure fsmagic=0x64626720
dont_measure fsmagic=0x01021994
dont_measure fsmagic=0x858458f6
dont_measure fsmagic=0x73636673
dont_measure fsmagic=0xf97cff8c
dont_measure fsmagic=0x27e0eb
dont_measure fsmagic=0x794c7630
measure func=KEY_CHECK keyrings=.ima
measure func=BPRM_CHECK
measure func=FILE_MMAP mask=MAY_EXEC
```

Copy the IMA signature verification key(s) so that the Dracut scripts can load the keys onto the .ima keyring early during system startup:

```
mkdir -p /etc/keys/ima
cp /usr/share/doc/kernel-keys/$(uname -r)/ima.cer /etc/keys/ima   # RHEL/CentOS
```

Enable the IMA Dracut scripts in the initramfs:

```
dracut --kver $(uname -r) --force --add integrity
```

Then reboot the system:

```
reboot
```

Once the system has been rebooted, it must show at least two entries in the IMA log where keys were loaded onto the .ima keyring:

```
grep -E '\.ima' /sys/kernel/security/ima/ascii_runtime_measurements
```

The first entry represents the Linux kernel signing key and the second entry is the IMA file signing key. We now create the policy:

```
grep \
  -E "(boot_aggregate| ima-buf )" \
  /sys/kernel/security/ima/ascii_runtime_measurements > trimmed_ima_log
keylime_create_policy -k -m ./trimmed_ima_log -o mypolicy.json
```

The first command creates a trimmed-down IMA measurement log that only contains the boot_aggregate and ima-buf entries; the latter show the key(s) that were loaded onto the .ima keyring. The second command creates the runtime policy that holds the boot_aggregate entry and a hash over the keys that were loaded onto the .ima keyring.
This hash is used to verify that only trusted keys are learned. We can now start to monitor this system:

```
touch payload   # create empty payload for example purposes
keylime_tenant -c update --uuid <agent-uuid> -f payload --runtime-policy ./mypolicy.json
```

In case the verification of the system fails, we need to inspect the verifier log and add the files that failed verification to the trimmed_ima_log. Assuming files matching the filename pattern livesys failed verification, we repeat the steps above, adding files matching that pattern to the trimmed log. These files will then be verified using their hashes rather than their signatures. Another possibility would be to add these files to the list of excluded files. We may need to repeat the following steps until the system passes verification:

```
grep \
  -E "(boot_aggregate| ima-buf |livesys)" \
  /sys/kernel/security/ima/ascii_runtime_measurements > trimmed_ima_log
keylime_create_policy -k -m ./trimmed_ima_log -o mypolicy.json
keylime_tenant -c update --uuid <agent-uuid> -f payload --runtime-policy ./mypolicy.json
```

To trigger a verification failure, an unsigned application can be started:

```
cat <<EOF > test.sh
echo Test
EOF
chmod 0755 test.sh
./test.sh
```

To re-enable the verification of the system, the policy needs to be updated to contain test.sh and possibly all other applications that are not signed:

```
grep \
  -E "(boot_aggregate| ima-buf |test.sh)" \
  /sys/kernel/security/ima/ascii_runtime_measurements > trimmed_ima_log
keylime_create_policy -k -m ./trimmed_ima_log -o mypolicy.json
keylime_tenant -c update --uuid <agent-uuid> -f payload --runtime-policy ./mypolicy.json
```

Since Keylime 6.6.0 the old JSON and flat-file formats for runtime policies are deprecated. Keylime provides a utility, keylime_convert_runtime_policy, to convert those into the new format.
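A conversion might then look like the sketch below; the flag names are an assumption for illustration only, so check keylime_convert_runtime_policy -h for the exact options on your version:

```
# Flag names are assumed for this sketch; consult `keylime_convert_runtime_policy -h`.
keylime_convert_runtime_policy -a ./allowlist.txt -o ./runtime_policy.json
```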
{ "category": "Provisioning", "file_name": "installation.html.md", "project_name": "Keylime", "subcategory": "Security & Compliance" }
[ { "data": "Contents: Keylimes runtime integrity monitoring requires the set up of Linux IMA. More information about IMA in general can be found in the openSUSE Wiki. You should refer to your Linux Distributions documentation to enable IMA, but as a general guide most recent versions already have CONFIG_IMA toggled to Y as a value during Kernel compile. It is then just a case of deploying an ima-policy file. On a Fedora or Debian system, the file is located in /etc/ima/ima-policy. For configuration of your IMA policy, please refer to the IMA Documentation. Within Keylime we use the following for demonstration (found in demo/ima-policies/ima-policy-keylime): ``` dont_measure fsmagic=0x9fa0 dont_measure fsmagic=0x62656572 dont_measure fsmagic=0x64626720 dont_measure fsmagic=0x01021994 dont_measure fsmagic=0x858458f6 dont_measure fsmagic=0x73636673 dont_measure fsmagic=0xf97cff8c dont_measure fsmagic=0x27e0eb dont_measure fsmagic=0x794c7630 dontmeasure objtype=varlogt dontmeasure objtype=auditdlogt dontmeasure objtype=tmp_t measure func=BPRM_CHECK measure func=FILEMMAP mask=MAYEXEC measure func=MODULE_CHECK uid=0 ``` This default policy measures all executables in bprm_check and all files mmapped executable in file_mmap and module checks and skips several irrelevant files (logs, audit, tmp, etc). Once your ima-policy is in place, reboot your machine (or even better have it present in your image for first boot). You can then verify IMA is measuring your system: ``` PCR template-hash filedata-hash filename-hint 10 3c93cea361cd6892bc8b9e3458e22ce60ef2e632 ima-ng sha1:ac7dd11bf0e3bec9a7eb2c01e495072962fb9dfa boot_aggregate 10 3d1452eb1fcbe51ad137f3fc21d3cf4a7c2e625b ima-ng sha1:a212d835ca43d7deedd4ee806898e77eab53dafa /usr/lib/systemd/systemd 10 e213099a2bf6d88333446c5da617e327696f9eb4 ima-ng sha1:6da34b1b7d2ca0d5ca19e68119c262556a15171d /usr/lib64/ld-2.28.so 10 7efd8e2a3da367f2de74b26b84f20b37c692b9f9 ima-ng sha1:af78ea0b455f654e9237e2086971f367b6bebc5f /usr/lib/systemd/libsystemd-shared-239.so 10 784fbf69b54c99d4ae82c0be5fca365a8272414e ima-ng sha1:b0c601bf82d32ff9afa34bccbb7e8f052c48d64e /etc/ld.so.cache ``` A runtime policy in its most basic form is a set of golden cryptographic hashes of files un-tampered state or of keys that may be loaded onto keyrings for IMA verification. Keylime will load the runtime policy into the Keylime Verifier. Keylime will then poll tpm quotes to PCR 10 on the agents TPM and validate the agents file(s) state against the policy. If the object has been tampered with or an unexpected key was loaded onto a keyring, the hashes will not match and Keylime will place the agent into a failed state. Likewise, if any files invoke the actions stated in ima-policy that are not matched in the allowlist, keylime will place the agent into a failed state. Allowlists are contained in Keylime runtime policies - see below for more details. Runtime policies heavily depend on the IMA configuration and used files by the operating system. Keylime provides two helper scripts for getting started. Note Those scripts only provide a reference point to get started and not a complete solution. We encourage developers / users of Keylime to be creative and come up with their own process for securely creating and maintaining runtime policies. The first script generates a runtime policy from the initramfs, IMA log (just for the boot aggregate) and files located on the root filesystem of a running system. 
The createruntimepolicy.sh script is available here Run the script as follows: ``` ``` For more options see the help page" }, { "data": "``` Usage: $0 -o/--outputfile FILENAME [-a/--algo ALGO] [-x/--ramdisk-location PATH] [-y/--bootaggregate-location PATH] [-z/--rootfs-location PATH] [-e/--exclude_list FILENAME] [-s/--skip-path PATH]\" optional arguments: -a/--algo (checksum algorithmi to be used, default: sha1sum) -x/--ramdisk-location (path to initramdisk, default: /boot/, set to \"none\" to skip) -y/--bootaggregate-location (path for IMA log, used for boot aggregate extraction, default: /sys/kernel/security/ima/asciiruntime_measurements, set to \"none\" to skip) -z/--rootfs-location (path to root filesystem, default: /, cannot be skipped) -e/--exclude_list (filename containing a list of paths to be excluded (i.e., verifier will not try to match checksums), default: none) -s/--skip-path (comma-separated path list, files found there will not have checksums calculated, default: none) -h/--help show this message and exit ``` Note: note, you need the OpenSSL installed to have the sha*sum CLI executables available The resulting runtimepolicykeylime.json file can be directly used by keylime_tenant (option --runtime-policy) Warning Its best practice to create the runtime policy in a secure environment. Ideally, this should be on a fully encrypted, air gapped computer that is permanently isolated from the Internet. Disable all network cards and sign the runtime policy hash to ensure no tampering occurs when transferring to other machines. The second script allows the user to build more complex policies by providing options to include: keyring verification, IMA verification keys, generating allowlist from IMA measurement log and extending existing policies. A basic policy can be easily created by using a IMA measurement log from system: ``` keylimecreatepolicy -m /path/to/asciiruntimemeasurements -o runtime_policy.json ``` For more options see the help page keylimecreatepolicy -h: ``` usage: keylimecreatepolicy [-h] [-B BASEPOLICY] [-k] [-b] [-a ALLOWLIST] [-m IMAMEASUREMENTLIST] [-i IGNOREDKEYRINGS] [-o OUTPUT] [--no-hashes] [-A IMASIGNATUREKEYS] This is an experimental tool for adding items to a Keylime's IMA runtime policy options: -h, --help show this help message and exit -B BASEPOLICY, --base-policy BASEPOLICY Merge new data into the given JSON runtime policy -k, --keyrings Create keyrings policy entries -b, --ima-buf Process ima-buf entries other than those related to keyrings -a ALLOWLIST, --allowlist ALLOWLIST Use given plain-text allowlist -m IMAMEASUREMENTLIST, --ima-measurement-list IMAMEASUREMENTLIST Use given IMA measurement list for keyrings and critical data extraction rather than /sys/kernel/security/ima/asciiruntimemeasurements -i IGNOREDKEYRINGS, --ignored-keyrings IGNOREDKEYRINGS Ignored the given keyring; this option may be passed multiple times -o OUTPUT, --output OUTPUT File to write JSON policy into; default is to print to stdout --no-hashes Do not add any hashes to the policy -A IMASIGNATUREKEYS, --add-ima-signature-verification-key IMASIGNATUREKEYS Add the given IMA signature verification key to the Keylime-internal 'tenant_keyring'; the key should be an x509 certificate in DER or PEM format but may also be a public or private key file; this option may be passed multiple times ``` IMA can measure which keys are loaded onto different keyrings. Keylime has the option to verify those keys and automatically use them for signature verification. 
The hash of the an key can be generated for example with: ``` sha256sum /etc/keys/ima/rsakey-rsa.crt.der ``` As seen the the JSON schema below, the hash (sha1 or sha256) depending on the IMA configuration can be added as the following where in .ima is the keyring the key gets loaded onto and <SHA256_HASH> is the hash of that key: ``` jq '.keyrings += {\".ima\" : [\"<SHA256HASH>\"]}' runtimepolicy.json > runtimepolicywith_keyring.json ``` The following rule should be added to the IMA policy so that IMA reports keys loaded onto keyrings .ima and .evm (since Linux 5.6): ``` measure func=KEY_CHECK keyrings=.ima|.evm ``` If the key should only be verified and not be used for IMA signature verification, then it can be added to the ignore list: ``` jq '.ima.ignoredkeyrings += [\".ima\"]' runtimepolicy.json > runtimepolicyignore_ima.json ``` If * is added no verified keyring is used for IMA signature" }, { "data": "The tenant parses the allow and exclude list into a JSON object that is then sent to the verifier. Depending of the use case the object can also be constructed manually instead of using the tenant. ``` { \"$schema\": \"https://json-schema.org/draft/2020-12/schema\", \"title\": \"Keylime IMA policy\", \"type\": \"object\", \"properties\": { \"meta\": { \"type\": \"object\", \"properties\": { \"version\": { \"type\": \"integer\", \"description\": \"Version number of the IMA policy schema\" } }, \"required\": [\"version\"], \"additionalProperties\": false }, \"release\": { \"type\": \"number\", \"title\": \"Release version\", \"description\": \"Version of the IMA policy (arbitrarily chosen by the user)\" }, \"digests\": { \"type\": \"object\", \"title\": \"File paths and their digests\", \"patternProperties\": { \".*\": { \"type\": \"array\", \"title\": \"Path of a valid file\", \"items\": { \"type\": \"string\", \"title\": \"Hash of an valid file\" } } } }, \"excludes\": { \"type\": \"array\", \"title\": \"Excluded file paths\", \"items\": { \"type\": \"string\", \"format\": \"regex\" } }, \"keyrings\": { \"type\": \"object\", \"patternProperties\": { \".*\": { \"type\": \"string\", \"title\": \"Hash of the content in the keyring\" } } }, \"ima-buf\": { \"type\": \"object\", \"title\": \"Validation of ima-buf entries\", \"patternProperties\": { \".*\": { \"type\": \"string\", \"title\": \"Hash of the ima-buf entry\" } } }, \"verification-keys\": { \"type\": \"array\", \"title\": \"Public keys to verify IMA attached signatures\", \"items\": { \"type\": \"string\" } }, \"ima\": { \"type\": \"object\", \"title\": \"IMA validation configuration\", \"properties\": { \"ignored_keyrings\": { \"type\": \"array\", \"title\": \"Ignored keyrings for key learning\", \"description\": \"The IMA validation can learn the used keyrings embedded in the kernel. Use '*' to never learn any key from the IMA keyring measurements\", \"items\": { \"type\": \"string\", \"title\": \"Keyring name\" } }, \"loghashalg\": { \"type\": \"string\", \"title\": \"IMA entry running hash algorithm\", \"description\": \"The hash algorithm used for the running hash in IMA entries (second value). The kernel currently hardcodes it to sha1.\", \"const\": \"sha1\" } }, \"required\": [\"ignoredkeyrings\", \"loghash_alg\"], \"additionalProperties\": false } }, \"required\": [\"meta\", \"release\", \"digests\", \"excludes\", \"keyrings\", \"ima\", \"ima-buf\", \"verification-keys\"], \"additionalProperties\": false } ``` Now that we have our runtime policy available, we can send it to the verifier. 
Note If youre using a TPM Emulator (for example with the ansible-keylime-tpm-emulator, you will also need to run the keylime ima emulator. To do this, open a terminal and run keylimeimaemulator Using the keylime_tenant we can send the runtime policy as follows: ``` touch payload # create empty payload for example purposes keylime_tenant -c add --uuid <agent-uuid> -f payload --runtime-policy /path/to/policy.json ``` Note If your agent is already registered, you can use -c update Create a script that does anything (for example echo \"hello world\") that is not present in your runtime policy. Run the script as root on the agent machine. You will then see the following output on the verifier showing the agent status change to failed: ``` keylime.tpm - INFO - Checking IMA measurement list... keylime.ima - WARNING - File not found in allowlist: /root/evil_script.sh keylime.ima - ERROR - IMA ERRORS: template-hash 0 fnf 1 hash 0 good 781 keylime.cloudverifier - WARNING - agent D432FBB3-D2F1-4A97-9EF7-75BD81C00000 failed, stopping polling ``` Keylime supports the verification of IMA file signatures, which also helps to detect modifications on immutable files and can be used to complement or even replace the allowlist of hashes in the runtime policy if all relevant executables and libraries are" }, { "data": "However, the set up of a system that has all files signed is beyond the scope of this documentation. In the following we will show how files can be signed and how a system with signed files must be registered. We assume that the system has already been set up for runtime-integrity monitoring following the above steps and that the system would not show any errors on the Keylime Verifier side. It should not be registered with the keylime verifier at this point. If it is, we now deregister it: ``` keylime_tenant -c delete -u <agent-uuid> ``` Our first step is to enable IMA Appraisal in Linux. Recent Fedora kernels for example have IMA Appraisal support built-in but not activated. To enable it, we need to add the following Linux kernel parameters to the Linux boot command line: ``` imaappraise=fix imatemplate=ima-sig ima_policy=tcb ``` For this we edit /etc/default/grub and append the above parameters to the GRUBCMDLINELINUX line and then recreate the systems grub configuration file with the following command: ``` sudo grub2-mkconfig -o /boot/grub2/grub.cfg ``` IMA will be in IMA Appraisal fix-mode when the system is started up the next time. Fix-mode, unlike enforcement mode, does not require that all files be signed but will give us the benefit that the verifier receives all file signatures of signed executables. For IMA Appraisal to append the file signatures to the IMA log, we need to append the following line to the above IMA policy: ``` appraise func=BPRMCHECK fowner=0 appraisetype=imasig ``` We now create our IMA file signing key using the following commands: ``` openssl genrsa -out ima-filesigning.pem 2048 openssl rsa -in ima-filesigning.pem -pubout -out ima-pub.pem ``` Next, we determine the hash (sha1 or sha256) that IMA is using for file measurements by looking at the IMA measurement log and then use evmctl to sign a demo executable that we derive from the echo tool: ``` sudo dnf -y install ima-evm-utils cp /bin/echo ./myecho sudo evmctl ima_sign --key ima-filesigning.pem -a <hash> myecho ``` Note It is important that we use the same hash for signing the file that IMA also uses for file measurements. 
In the case we use sha1 since the IMA measurement log further above shows sha1 filedata-hashes in the 4th column. On more recent systems we would likely use sha256. Note If the IMA measurement log contains invalid signatures, the system will have to be rebooted to start over with a clean log that the Keylime Verifier can successfully verify. Invalid signatures may for example be in the log if executables were accidentally signed with the wrong hash, such as sha1 instead of sha256. In this case they all need to be re-signed to match the hash that IMA is using for file signatures. Another reason for an invalid signature may be that a file was modified after it was signed. Any file modification will invalidate the signature. Similarly, a malformatted or altered security.ima extended attribute will lead to a signature verification failure. Yet another reason may be that an unknown key was used for signing files. In this case the system should be re-registered with that additional key using the Keylime tenant tool. To verify that the file has been properly signed, we can use the following command, which will show the" }, { "data": "extended attributes value: ``` getfattr -m ^security.ima --dump myecho ``` We now reboot the machine: ``` reboot ``` After the reboot the IMA measurement log should not have any measurement of the myecho tool. The following command should not return anything: ``` grep myecho /sys/kernel/security/ima/asciiruntimemeasurements ``` We now create a new policy that includes the signing key using the keylimecreatepolicy tool: ``` keylimecreatepolicy -B /path/to/runtimepolicy.json -A /path/to/ima-pub.pem -o /output/path/runtimepolicywithkey.json ``` After that we register the agent with the new policy: ``` keylimetenant -c add --uuid <agent-uuid> -f payload --runtime-policy /path/to/runtimepolicywithkey.json ``` We can now execute the myecho tool as root: ``` sudo ./myecho ``` At this point we should not see any errors on the verifier side and there should be one entry of myecho in the IMA measurement log that contains a column after the file path containing the file signature: ``` grep myecho /sys/kernel/security/ima/asciiruntimemeasurements ``` To test that signature verification works, we can now invalidate the signature by appending a byte to the file and executing it again: ``` echo >> ./myecho sudo ./myecho ``` We should now see two entries in the IMA measurement log. Each one should have a different measurement: ``` grep myecho /sys/kernel/security/ima/asciiruntimemeasurements ``` The verifier log should now indicating a bad file signature: ``` keylime.tpm - INFO - Checking IMA measurement list on agent: D432FBB3-D2F1-4A97-9EF7-75BD81C00000 keylime.ima - WARNING - signature for file /home/test/myecho is not valid keylime.ima - ERROR - IMA ERRORS: template-hash 0 fnf 0 hash 0 bad-sig 1 good 3042 keylime.cloudverifier - WARNING - agent D432FBB3-D2F1-4A97-9EF7-75BD81C00000 failed, stopping polling ``` work-in-progress on CentOS and Fedora. Using key learning to verify files requires that files logged by IMA are appropriately signed. If files are not signed or have a bad signature then they must be either in the exclude list of the runtime policy or their hashes must be part of the runtime policy. 
It should also be noted that IMA signature verification provides lock-down of a system and ensures the provenance of files from a trusted source but, unlike file hashes, does not provide protection for file renaming or replacing files and signatures with other versions (downgrading). For the following setup we use RHEL 9.3 since this distribution carries file signatures in its rpm packages and the Dracut scripts have been added to load the IMA signature verification keys onto the .ima keyring. All below steps are run as root. To ensure that file signatures are installed when packages are installed, run the following command: ``` dnf -y install rpm-plugin-ima ``` Since some packages did not carry file signatures until recently, update all packages to ensure that the signatures are installed: ``` dnf -y update ``` In case the system was previously not installed with file signatures, run the following command to reinstall all packages with file signatures: ``` dnf -y reinstall \\* ``` To verify whether a particular file has its file signature installed use the following command to display the contents of security.ima. If nothing is displayed then this file misses its file signature: ``` getfattr -m ^security.ima -e hex --dump /usr/bin/bash ``` We must setup the system with the kernel command line option ima_template=ima-sig so that IMA signatures become part of the measurement" }, { "data": "It is not necessary to enable signature enforcement on the system, measuring executed applications is sufficient for the purpose of key learning. For this we edit /etc/default/grub and adjust the following line: ``` GRUBCMDLINELINUX=\"rhgb quiet ima_template=ima-sig\" ``` Then run the following command to update the kernel command line options: ``` grub2-mkconfig -o /boot/grub2/grub.conf # grub.cfg on CentOS/RHEL ``` Set the following IMA policy in /etc/ima/ima-policy when systemd will load the policy: ``` dont_measure fsmagic=0x9fa0 dont_measure fsmagic=0x62656572 dont_measure fsmagic=0x64626720 dont_measure fsmagic=0x01021994 dont_measure fsmagic=0x858458f6 dont_measure fsmagic=0x73636673 dont_measure fsmagic=0xf97cff8c dont_measure fsmagic=0x27e0eb dont_measure fsmagic=0x794c7630 measure func=KEY_CHECK keyrings=.ima measure func=BPRM_CHECK measure func=FILEMMAP mask=MAYEXEC ``` Copy IMA signature verification key(s) so that Dracut scripts can load the keys onto the .ima keyring early during system startup: ``` mkdir -p /etc/keys/ima cp /usr/share/doc/kernel-keys/$(uname -r)/ima.cer /etc/keys/ima # RHEL/CentOS ``` Enable the IMA Dracut scripts in the initramfs: ``` dracut --kver $(uname -r) --force --add integrity ``` Then reboot the system: ``` reboot ``` Once the system has been rebooted it must show at least two entries in the IMA log where keys were loaded onto the .ima keyring: grep -E .ima /sys/kernel/security/ima/asciiruntimemeasurements The first entry represents the Linux kernel signing key and the second entry is the IMA file signing key. We now create the policy: ``` grep \\ -E \"(boot_aggregate| ima-buf )\" \\ /sys/kernel/security/ima/asciiruntimemeasurements > trimmedimalog keylimecreatepolicy -k -m ./trimmedimalog -o mypolicy.json ``` The 1st command creates a trimmed-down IMA measurement log that only contains the boot_aggregate and ima-buf entries. The latter show the key(s) that were loaded onto the .ima keyring. The 2nd command creates the runtime policy that holds the boot_aggregate entry and a hash over keys that were loaded onto the .ima keyring. 
This hash is used to verify that only trusted keys are learned. We can now start to monitor this system: ``` touch payload # create empty payload for example purposes keylime_tenant -c update --uuid <agent-uuid> -f payload --runtime-policy ./mypolicy.json ``` In case the verification of the system fails we need to inspect the verifier log and add those files to the trimmedimalog that failed verification. Assuming files with the filename pattern livesys failed verification we repeat the steps above as follows by adding files with the file pattern livesys to the trimmed log. These files will then be verified using their hashes rather than signatures. Another possibility would be to add these files to the list of excluded files. We may need to repeat the following steps until the system passes verification: ``` grep \\ -E \"(boot_aggregate| ima-buf |livesys)\" \\ /sys/kernel/security/ima/asciiruntimemeasurements > trimmedimalog keylimecreatepolicy -k -m ./trimmedimalog -o mypolicy.json keylime_tenant -c update --uuid <agent-uuid> -f payload --runtime-policy ./mypolicy.json ``` To trigger a verification failure an unsigned application can be started: ``` cat <<EOF > test.sh echo Test EOF chmod 0755 test.sh ./test.sh ``` To re-enable the verification of the system the policy needs to be updated to contain test.sh and possibly all other applications that are not signed: -E (bootaggregate| ima-buf |test.sh) /sys/kernel/security/ima/asciiruntimemeasurements > trimmedima_log keylimecreatepolicy -k -m ./trimmedimalog -o mypolicy.json keylime_tenant -c update uuid <agent-uuid> -f payload runtime-policy ./mypolicy.json Since Keylime 6.6.0 the old JSON and flat file formats for runtime policies are deprecated. Keylime provides with keylimeconvertruntime_policy a utility to convert those into the new format. Copyright 2023, Keylime Developers. Revision 829169e6." } ]
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "kube-bench", "subcategory": "Security & Compliance" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index:

```
sparse OR index
```

To exclude files from your search results, you can use the NOT operator. For example, to exclude files in the testing directory, you can search:

```
"fatal error" NOT path:testing
```

You can use parentheses to express more complicated boolean expressions. For example:

```
(language:ruby OR language:python) AND NOT path:"/tests/"
```

You can use specialized keywords to qualify your search.

To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example:

```
repo:github-linguist/linguist
```

To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example:

```
repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter
```

Note: code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work.

To search for files within an organization, use the org: qualifier. For example:

```
org:github
```

To search for files within a personal account, use the user: qualifier. For example:

```
user:octocat
```

Note: code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work.

To narrow down to specific languages, use the language: qualifier. For example:

```
language:ruby OR language:cpp OR language:csharp
```

For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it.

To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use:

```
path:unit_tests
```

The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md, since they both contain unit_tests somewhere in their path.

To match only a specific filename (and not part of the path), you could use a regular expression:

```
path:/(^|\/)README\.md$/
```

Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see "Using regular expressions."

You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use:

```
path:*.txt
```

```
path:src/*.js
```

By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example:

```
path:/src/*.js
```

Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example:

```
path:/src/**/*.js
```

You can also use the ? glob character. For example, to match the path file.aac or file.abc, you can use:

```
path:*.a?c
```

```
path:"file?"
```

Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?.
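Qualifiers, bare terms, and regular expressions compose freely. As an illustration (the repository name is chosen arbitrarily), the following query finds console logging calls in JavaScript files under src/:

```
repo:octocat/hello-world path:/src/**/*.js /console\.(log|error)/
```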
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "name). For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is currently supported for a limited set of languages, and we are working on adding support for more. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports values such as archived and fork. For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expression features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "kube-bench", "subcategory": "Security & Compliance" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "github-privacy-statement.md", "project_name": "kube-bench", "subcategory": "Security & Compliance" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Click here for the French version: Déclaration de confidentialité de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users' interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "efforts. If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages: Any GitHub page that serves non-essential cookies will have a link in the page's footer to cookie settings. You can express your preferences at any time by clicking on that link and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites: You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "kube-hunter", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan: For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage: Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing: For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q ("Changes to These Terms"). These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement, oral or written, and any other communications between you and GitHub relating to the subject matter of these terms, including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal.
{ "category": "Provisioning", "file_name": ".md", "project_name": "kube-hunter", "subcategory": "Security & Compliance" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Cliquez ici pour obtenir la version franaise: Dclaration de confidentialit de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages: Any GitHub page that serves non-essential cookies will have a link in the page's footer to cookie settings. You can express your preferences at any time by clicking on that link and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites: You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term "personal information" as an equivalent to the term "Personal Data". These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of "Services," which includes GitHub applications, software, products, or services. That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the "Do Not Share My Personal Information" link on the footer of our Websites or use the Global Privacy Control ("GPC") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the "Shine the Light" law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes ("California Customers") may request information about whether the business has disclosed personal information to any third parties for the third parties' direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law.
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia, you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com.
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "kube-hunter", "subcategory": "Security & Compliance" }
[ { "data": "A ServiceAccount provides an identity for processes that run in a Pod. A process inside a Pod can use the identity of its associated service account to authenticate to the cluster's API server. For an introduction to service accounts, read configure service accounts. This task guide explains some of the concepts behind ServiceAccounts. The guide also explains how to obtain or revoke tokens that represent ServiceAccounts. You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds: To be able to follow these steps exactly, ensure you have a namespace named examplens. If you don't, create one by running: ``` kubectl create namespace examplens ``` Kubernetes distinguishes between the concept of a user account and a service account for a number of reasons: ServiceAccount tokens can be bound to API objects that exist in the kube-apiserver. This can be used to tie the validity of a token to the existence of another API object. Supported object types are as follows: When a token is bound to an object, the object's metadata.name and metadata.uid are stored as extra 'private claims' in the issued JWT. When a bound token is presented to the kube-apiserver, the service account authenticator will extract and verify these claims. If the referenced object no longer exists (or its metadata.uid does not match), the request will not be authenticated. When a service account token is bound to a Pod object, additional metadata is also embedded into the token that indicates the value of the bound pod's spec.nodeName field, and the uid of that Node, if available. This node information is not verified by the kube-apiserver when the token is used for authentication. It is included so integrators do not have to fetch Pod or Node API objects to check the associated Node name and uid when inspecting a JWT. The TokenReview API can be used to verify and extract private claims from a token: ``` kubectl create token my-sa --bound-object-kind=\"Pod\" --bound-object-name=\"test-pod\" ``` ``` apiVersion: authentication.k8s.io/v1 kind: TokenReview spec: token: <token from step 2> ``` ``` kubectl create -o yaml -f tokenreview.yaml # we use '-o yaml' so we can inspect the output ``` You should see an output like below: ``` apiVersion: authentication.k8s.io/v1 kind: TokenReview metadata: creationTimestamp: null spec: token: <token> status: audiences: https://kubernetes.default.svc.cluster.local authenticated: true user: extra: authentication.kubernetes.io/credential-id: JTI=7ee52be0-9045-4653-aa5e-0da57b8dccdc authentication.kubernetes.io/node-name: kind-control-plane authentication.kubernetes.io/node-uid: 497e9d9a-47aa-4930-b0f6-9f2fb574c8c6 authentication.kubernetes.io/pod-name: test-pod authentication.kubernetes.io/pod-uid: e87dbbd6-3d7e-45db-aafb-72b24627dff5 groups: system:serviceaccounts system:serviceaccounts:default system:authenticated uid: f8b4161b-2e2b-11e9-86b7-2afc33b31a7e username: system:serviceaccount:default:my-sa ``` By default, the Kubernetes control plane (specifically, the ServiceAccount admission controller) adds a projected volume to Pods, and this volume includes a token for Kubernetes API access. Here's an example of how that looks for a launched Pod: ``` ... 
name: kube-api-access-<random-suffix> projected: sources: serviceAccountToken: path: token # must match the path the app expects configMap: items: key: ca.crt path: ca.crt name: kube-root-ca.crt downwardAPI: items: fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace ``` That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. The three sources are: Any container within the Pod that mounts this particular volume can access the above" }, { "data": "Versions of Kubernetes before v1.22 automatically created credentials for accessing the Kubernetes API. This older mechanism was based on creating token Secrets that could then be mounted into running Pods. In more recent versions, including Kubernetes v1.30, API credentials are obtained directly using the TokenRequest API, and are mounted into Pods using a projected volume. The tokens obtained using this method have bounded lifetimes, and are automatically invalidated when the Pod they are mounted into is deleted. You can still manually create a Secret to hold a service account token; for example, if you need a token that never expires. Once you manually create a Secret and link it to a ServiceAccount, the Kubernetes control plane automatically populates the token into that Secret. Before version 1.24, Kubernetes automatically generated Secret-based tokens for ServiceAccounts. To distinguish between automatically generated tokens and manually created ones, Kubernetes checks for a reference from the ServiceAccount's secrets field. If the Secret is referenced in the secrets field, it is considered an auto-generated legacy token. Otherwise, it is considered a manually created legacy token. For example: ``` apiVersion: v1 kind: ServiceAccount metadata: name: build-robot namespace: default secrets: name: build-robot-secret # usually NOT present for a manually generated token ``` Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). Tokens that continue to be unused for this defined period (again, by default, one year) will subsequently be purged by the control plane. If users use an invalidated auto-generated token, the token validator will When receiving this validation error, users can update the Secret to remove the kubernetes.io/legacy-token-invalid-since label to temporarily allow use of this token. Here's an example of an auto-generated legacy token that has been marked with the kubernetes.io/legacy-token-last-used and kubernetes.io/legacy-token-invalid-since labels: ``` apiVersion: v1 kind: Secret metadata: name: build-robot-secret namespace: default labels: kubernetes.io/legacy-token-last-used: 2022-10-24 kubernetes.io/legacy-token-invalid-since: 2023-10-25 annotations: kubernetes.io/service-account.name: build-robot type: kubernetes.io/service-account-token ``` A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace. The service account token controller runs as part of kube-controller-manager. This controller acts asynchronously. It: You must pass a service account private key file to the token controller in the kube-controller-manager using the --service-account-private-key-file flag. The private key is used to sign generated service account tokens. 
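Because these tokens are ordinary JWTs signed with that key pair, you can inspect one directly to see the claims discussed above. The following is a rough sketch rather than an official recipe; it assumes kubectl v1.24+ and a local python3, and reuses the my-sa ServiceAccount from the earlier example:

```
# Mint a short-lived, audience-bound token (uses the TokenRequest API under the hood)
kubectl create token my-sa --duration=10m --audience=https://kubernetes.default.svc.cluster.local

# Decode the JWT payload (the second dot-separated segment) to inspect claims
# such as iss, aud, exp, and the kubernetes.io private claims
kubectl create token my-sa --duration=10m | python3 -c 'import sys, base64, json; p = sys.stdin.read().split(".")[1]; p += "=" * (-len(p) % 4); print(json.dumps(json.loads(base64.urlsafe_b64decode(p)), indent=2))'
```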
Similarly, you must pass the corresponding public key to the kube-apiserver using the --service-account-key-file flag. The public key will be used to verify the tokens during authentication. The modification of pods is implemented via a plugin called an Admission Controller. It is part of the API server. This admission controller acts synchronously to modify pods as they are created. When this plugin is active (and it is by default on most distributions), then it does the following when a Pod is created: This controller generates a ConfigMap called kube-system/kube-apiserver-legacy-service-account-token-tracking in the kube-system namespace. The ConfigMap records the timestamp when legacy service account tokens began to be monitored by the system. The legacy ServiceAccount token cleaner runs as part of the kube-controller-manager and checks every 24 hours to see if any auto-generated legacy ServiceAccount token has not been used in a specified amount of time. If so, the cleaner marks those tokens as invalid. The cleaner works by first checking the ConfigMap created by the control plane (provided that LegacyServiceAccountTokenTracking is" }, { "data": "If the current time is a specified amount of time after the date in the ConfigMap, the cleaner then loops through the list of Secrets in the cluster and evaluates each Secret that has the type kubernetes.io/service-account-token. If a Secret meets all of the following conditions, the cleaner marks it as invalid: The cleaner marks a Secret invalid by adding a label called kubernetes.io/legacy-token-invalid-since to the Secret, with the current date as the value. If an invalid Secret is not used in a specified amount of time, the cleaner will delete it. You use the TokenRequest subresource of a ServiceAccount to obtain a time-bound token for that ServiceAccount. You don't need to call this to obtain an API token for use within a container, since the kubelet sets this up for you using a projected volume. If you want to use the TokenRequest API from kubectl, see Manually create an API token for a ServiceAccount. The Kubernetes control plane (specifically, the ServiceAccount admission controller) adds a projected volume to Pods, and the kubelet ensures that this volume contains a token that lets containers authenticate as the right ServiceAccount. (This mechanism superseded an earlier mechanism that added a volume based on a Secret, where the Secret represented the ServiceAccount for the Pod but did not expire.) Here's an example of how that looks for a launched Pod: ``` ... name: kube-api-access-<random-suffix> projected: defaultMode: 420 # decimal equivalent of octal 0644 sources: serviceAccountToken: expirationSeconds: 3607 path: token configMap: items: key: ca.crt path: ca.crt name: kube-root-ca.crt downwardAPI: items: fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace ``` That manifest snippet defines a projected volume that combines information from three sources: Any container within the Pod that mounts this volume can access the above information. To create a non-expiring, persisted API token for a ServiceAccount, create a Secret of type kubernetes.io/service-account-token with an annotation referencing the ServiceAccount. The control plane then generates a long-lived token and updates that Secret with that generated token data. 
Here is a sample manifest for such a Secret: ``` apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: myserviceaccount ``` To create a Secret based on this example, run: ``` kubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml ``` To see the details for that Secret, run: ``` kubectl -n examplens describe secret mysecretname ``` The output is similar to: ``` Name:         mysecretname
Namespace:    examplens
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=myserviceaccount
              kubernetes.io/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1362 bytes
namespace:  9 bytes
token:      ... ``` If you launch a new Pod into the examplens namespace, it can use the myserviceaccount service-account-token Secret that you just created. If you know the name of the Secret that contains the token you want to remove: ``` kubectl delete secret name-of-secret ``` Otherwise, first find the Secret for the ServiceAccount. ``` kubectl -n examplens get serviceaccount/example-automated-thing -o yaml ``` The output is similar to: ``` apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}}
  creationTimestamp: "2019-07-21T07:07:07Z"
  name: example-automated-thing
  namespace: examplens
  resourceVersion: "777"
  selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
  uid: f23fd170-66f2-4697-b049-e1e266b7f835
secrets:
  - name: example-automated-thing-token-zyxwv ``` Then, delete the Secret you now know the name of: ``` kubectl -n examplens delete secret/example-automated-thing-token-zyxwv ``` If you created a namespace examplens to experiment with, you can remove it: ``` kubectl delete namespace examplens ```
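If you prefer to call the TokenRequest subresource directly rather than going through kubectl create token, a raw POST against the API server works as well. This is a sketch under the assumption that the examplens namespace and myserviceaccount from the examples above still exist; note that expirationSeconds has a minimum of 600:

```
cat <<'EOF' | kubectl create -o yaml --raw /api/v1/namespaces/examplens/serviceaccounts/myserviceaccount/token -f -
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenRequest",
  "spec": {
    "audiences": ["https://kubernetes.default.svc.cluster.local"],
    "expirationSeconds": 600
  }
}
EOF
```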
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "kube-hunter", "subcategory": "Security & Compliance" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "deployment_guide.md", "project_name": "KubeArmor", "subcategory": "Security & Compliance" }
[ { "data": "This guide assumes you have access to a k8s cluster. If you want to try non-k8s mode, for instance systemd mode to protect/audit containers or processes on VMs/bare-metal, check here. Check the KubeArmor support matrix to verify if your platform is supported. ``` helm repo add kubearmor https://kubearmor.github.io/charts helm repo update kubearmor helm upgrade --install kubearmor-operator kubearmor/kubearmor-operator -n kubearmor --create-namespace kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/pkg/KubeArmorOperator/config/samples/sample-config.yml``` You can find more details about helm related values and configurations here. ``` curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin [!NOTE] kArmor CLI provides a Developer Friendly way to interact with KubeArmor Telemetry. You can stream KubeArmor telemetry independently of kArmor CLI tool and integrate it with your chosen SIEM (Security Information and Event Management) solutions. Here's a guide on how to achieve this integration. This guide assumes you have kArmor CLI to access KubeArmor Telemetry but you can view it on your SIEM tool once integrated. ``` kubectl create deployment nginx --image=nginx POD=$(kubectl get pod -l app=nginx -o name)``` [!NOTE] $POD is used to refer to the target nginx pod in many cases below. Package management tools can be used in the runtime env to download new binaries that will increase the attack surface of the pods. Attackers use package management tools to download accessory tooling (such as masscan) to further their cause. It is better to block usage of package management tools in production environments. Lets apply the policy to block such execution: ``` cat <<EOF | kubectl apply -f - apiVersion: security.kubearmor.com/v1 kind: KubeArmorPolicy metadata: name: block-pkg-mgmt-tools-exec spec: selector: matchLabels: app: nginx process: matchPaths: path: /usr/bin/apt path: /usr/bin/apt-get action: Block EOF``` Now execute the apt command to download the masscan tool. ``` kubectl exec -it $POD -- bash -c \"apt update && apt install masscan\"``` It will be denied permission to execute. ``` sh: 1: apt: Permission denied command terminated with exit code 126``` If you don't see Permission denied please refer here to debug this issue ``` karmor logs -n default --json``` ``` { \"Timestamp\": 1686475183, \"UpdatedTime\": \"2023-06-11T09:19:43.451704Z\", \"ClusterName\": \"default\", \"HostName\": \"ip-172-31-24-142\", \"NamespaceName\": \"default\", \"PodName\": \"nginx-8f458dc5b-fl42t\", \"Labels\": \"app=nginx\", \"ContainerID\": \"8762eafc25a35ab90089f79703b86659989e8e547c2c029fb60f55d884355000\", \"ContainerName\": \"nginx\", \"ContainerImage\": \"docker.io/library/nginx:latest@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305\", \"HostPPID\": 3341922, \"HostPID\": 3341928, \"PPID\": 786, \"PID\": 792, \"ParentProcessName\": \"/bin/dash\", \"ProcessName\": \"/usr/bin/apt\", \"PolicyName\": \"block-pkg-mgmt-tools-exec\", \"Severity\": \"1\", \"Type\": \"MatchedPolicy\", \"Source\": \"/bin/dash\", \"Operation\": \"Process\", \"Resource\": \"/usr/bin/apt update\", \"Data\": \"syscall=SYS_EXECVE\", \"Enforcer\": \"BPFLSM\", \"Action\": \"Block\", \"Result\": \"Permission denied\" }``` K8s mounts the service account token by default in each pod even if there is no app using it. Attackers use these service account tokens to do lateral movements. 
For example, to access the service account token: ``` kubectl exec -it $POD -- bash
(inside pod) $ curl https://$KUBERNETES_PORT_443_TCP_ADDR/api --insecure --header "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)"
{
  "kind": "APIVersions",
  "versions": [ "v1" ],
  "serverAddressByClientCIDRs": [
    { "clientCIDR": "0.0.0.0/0", "serverAddress": "ip-10-0-48-51.us-east-2.compute.internal:443" }
  ]
} ``` Thus we can see that one can use the service account token to access the Kube API server. Let's apply a policy to block access to the service account token: ``` cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-service-access-token-access
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
      - dir: /run/secrets/kubernetes.io/serviceaccount/
        recursive: true
  action: Block
EOF ``` Now when anyone tries to access the service account token, the access is denied: ``` kubectl exec -it $POD -- bash
(inside pod) $ curl https://$KUBERNETES_PORT_443_TCP_ADDR/api --insecure --header "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)"
cat: /run/secrets/kubernetes.io/serviceaccount/token: Permission denied
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/api\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
} ``` If you don't see Permission denied, please refer here to debug this issue. Access to certain folders/paths might have to be audited for compliance/reporting reasons. File visibility is disabled by default to minimize telemetry; some file-based policies need it enabled. To enable file visibility on a namespace level: ``` kubectl annotate ns default kubearmor-visibility="process,file,network" --overwrite ``` For more details on this: https://docs.kubearmor.io/kubearmor/documentation/kubearmor_visibility#updating-namespace-visibility Let's audit access to the /etc/nginx/ folder within the deployment: ``` cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-etc-nginx-access
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
      - dir: /etc/nginx/
        recursive: true
  action: Audit
EOF ``` Note: karmor logs -n default would show all the audit/block operations.
``` { \"Timestamp\": 1686478371, \"UpdatedTime\": \"2023-06-11T10:12:51.967519Z\", \"ClusterName\": \"default\", \"HostName\": \"ip-172-31-24-142\", \"NamespaceName\": \"default\", \"PodName\": \"nginx-8f458dc5b-fl42t\", \"Labels\": \"app=nginx\", \"ContainerID\": \"8762eafc25a35ab90089f79703b86659989e8e547c2c029fb60f55d884355000\", \"ContainerName\": \"nginx\", \"ContainerImage\": \"docker.io/library/nginx:latest@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305\", \"HostPPID\": 3224933, \"HostPID\": 3371357, \"PPID\": 3224933, \"PID\": 825, \"ParentProcessName\": \"/x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2\", \"ProcessName\": \"/bin/cat\", \"PolicyName\": \"audit-etc-nginx-access\", \"Severity\": \"1\", \"Type\": \"MatchedPolicy\", \"Source\": \"/bin/cat /etc/nginx/conf.d/default.conf\", \"Operation\": \"File\", \"Resource\": \"/etc/nginx/conf.d/default.conf\", \"Data\": \"syscall=SYSOPENAT fd=-100 flags=ORDONLY\", \"Enforcer\": \"eBPF Monitor\", \"Action\": \"Audit\", \"Result\": \"Passed\" }``` Least permissive policies require one to allow certain actions/operations and deny rest. With KubeArmor it is possible to specify as part of the policy as to what actions should be allowed and deny/audit the rest. Security Posture defines what happens to the operations that are not in the allowed list. Should it be audited (allow but alert), or denied (block and alert)? By default the security posture is set to audit. Lets change the security posture to default deny. ``` kubectl annotate ns default kubearmor-file-posture=block --overwrite``` ``` cat <<EOF | kubectl apply -f - apiVersion: security.kubearmor.com/v1 kind: KubeArmorPolicy metadata: name: only-allow-nginx-exec spec: selector: matchLabels: app: nginx file: matchDirectories: dir: / recursive: true process: matchPaths: path: /usr/sbin/nginx path: /bin/bash action: Allow EOF``` Observe that the policy contains Allow action. Once there is any KubeArmor policy having Allow action then the pods enter least permissive mode, allowing only explicitly allowed operations. Note: Use kubectl port-forward $POD --address 0.0.0.0 8080:80 to access nginx and you can see that the nginx web access still works normally. Lets try to execute some other processes: ``` kubectl exec -it $POD -- bash -c \"chroot\"``` Any binary other than bash and nginx would be permission denied. If you don't see Permission denied please refer here to debug this issue Last updated 4 months ago Was this helpful?" } ]
{ "category": "Provisioning", "file_name": "getting-started.md#examples.md", "project_name": "Kubescape", "subcategory": "Security & Compliance" }
[ { "data": "Using tools such as Kubescape to help remediate security issues in your cluster is crucial to any security plan. With Kubescape, you can scan your manifests, Helm charts, or active clusters for compliance with security frameworks such as NSA, MITRE, SOC2, and more. While Kubescape significantly increases your security compliance with active clusters, it works even better when your scans are automatically run against any manifest files as soon as they are pushed to your repository and, ideally, before merging to the main branch. Lets look at how to configure the GitHub Action to automatically run scans based on your chosen frameworks, ignore acceptable risks, and improve your security posture whenever a pull request is created against your repository. We use GitHub in this example, but you can apply the same principles to other Git vendors. The Kubernetes manifest well be scanning is a simple one. This way, we can focus on the task at hand, improving the security posture. If youd like to follow along, go ahead and create a new repository and add a README.md file to the main branch. Pull the repo to your local IDE of choice. Ill be using Codespaces through this tutorial to keep it simple. Once youve done that, lets get Github Actions configured. GitHub Actions is usually set by default, but just to be sure, head to Settings, Actions, and General, and ensure Allow all actions and reusable workflows is selected: Head to https://github.com/kubescape/github-action to see the documentation. Many of the Action examples shown require GitHub code scanning to be enabled. We'll remove that functionality in the following step since code scanning is only available in public and enterprise repositories. To create the action, you can use the GitHub UI or create the file manually. Well create the file manually within the IDE to keep things simple. Create a main.yml file within a newly created .github/workflows/ directory mkdir -p .github/workflows && touch .github/workflows/main.yml Open main.yml and paste in the following code to get us started: ``` name: Kubescape scanning for misconfigurations on: [pull_request] jobs: kubescape: runs-on: ubuntu-latest permissions: actions: read contents: read security-events: write steps: uses: actions/checkout@v3 uses: kubescape/github-action@main continue-on-error: false with: frameworks: NSA,MITRE verbose: true severityThreshold: low ``` This is going to run the action any time there is a pull request in the repository. It will run the Kubescan CLI with the NSA Framework (PDF Link) and the MITRE Framework. To read more about the frameworks available, you can find that information here:" }, { "data": "I have also added a few other customizations to provide for verbose output and set the severityThreshold to ensure any controls will cause the action to fail. . Push this to your repository: ``` git add . git commit -m \"Added Kubescape Workflow\" git push -u origin main ``` First, create a new branch called dev: git checkout -b dev Were going to do a very simple manifest that will deploy an Nginx pod to start. Although simple, it still contains several security vulnerabilities, as you can see. Create a file called nginx.yml in the root directory of your repository and add the following: ``` apiVersion: v1 kind: Pod metadata: name: super-risky-pod spec: containers: name: nginx image: nginx securityContext: privileged: true runAsUser: 0 ``` Now, were going to test the action! Create a new branch, commit the changes, and push to the new branch: ``` git add . 
git commit -m \"Added nginx.yml manifest\" git push origin dev ``` Now create the pull request. The --fill will use the commit message as the PR description. Feel free to change that using the --title and --body flags: gh pr create --fill You should see the pull request in your GitHub dashboard: Open the pull request, and you should see All checks have failed. since we added insecure code: Open the details and take a look and youll see a full report with, at the time of this writing, seven failed controls and the links and assisted remediation for each: If you click on each docs link, you can see the information about each control and how to remediate it. In the next section, well use various methods to clean up these failures. Now that weve seen how bad our security footprint is for this pod, lets start fixing the errors. In this example, were going to remediate for c-0057, c-0009, c-0017, c-0016, and c-0013. Well deal with c-0055 and c-0030 in the next step. Use the following content for nginx.yml and note the changes. ``` apiVersion: v1 kind: Pod metadata: name: super-risky-pod spec: securityContext: runAsUser: 1000 runAsGroup: 1000 fsGroup: 1000 containers: name: nginx image: nginx securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" ``` By using the docs, you can see pretty quickly how to remediate these issues. Ill just comment out the removals for clarity, but obviously, you would delete the lines entirely in production. Lets push these changes to our pull request and see how we did: ``` git add" }, { "data": "git commit -m \"remediated security issues\" git push origin dev ``` This will kick off another run since its a modification to the same branch with an open pull request. Check out the details of the run to see how it went: It appears weve fixed all but two control violations. Well look at how to ignore those in the next section. Alright! Our remediation efforts went well. There are still two more risks to consider. Lets say that, after consulting with your team, youve decided you arent concerned with them. Kubescape allows several options to ignore these. You can set the severityThreshold to ignore them, but since other controls are also of medium severity that you do care about, its probably best not to go that route. If you look at the Overall compliance-score at the bottom of the output, you see its a 93. You can also set a failedThreshold to ignore that, but that option may also pose a risk depending on your requirements. To be more precise, lets manually specify these controls in an exceptions file and instruct Kubescape to ignore them. 
Create an exceptions.json file and add the following content: ``` [
  {
    "name": "exclude-linux-hardening-policy-control",
    "policyType": "postureExceptionPolicy",
    "actions": [ "alertOnly" ],
    "resources": [
      {
        "designatorType": "Attributes",
        "attributes": { "kind": ".*" }
      }
    ],
    "posturePolicies": [
      { "controlID": "C-0055" }
    ]
  },
  {
    "name": "exclude-egress-and-ingress-blocked-policy-control",
    "policyType": "postureExceptionPolicy",
    "actions": [ "alertOnly" ],
    "resources": [
      {
        "designatorType": "Attributes",
        "attributes": { "kind": ".*" }
      }
    ],
    "posturePolicies": [
      { "controlID": "C-0030" }
    ]
  }
] ``` Now, add the following exceptions directive to the .github/workflows/main.yml file: ``` name: Kubescape scanning for misconfigurations
on: [pull_request]
jobs:
  kubescape:
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write
    steps:
      - uses: actions/checkout@v3
      - uses: kubescape/github-action@main
        continue-on-error: false
        with:
          frameworks: NSA,MITRE
          verbose: true
          severityThreshold: low
          exceptions: exceptions.json ``` Let's push the code and cross our fingers for all green checks! ``` git add .
git commit -m "added exceptions for c-0055 and c-0030"
git push origin dev ``` All green! If you click on the details and the Kubescape/github-action@main step, you'll see a 100% pass. You can now merge your PR and enjoy peace of mind that your cluster is secure, thanks to Kubescape! This has been a quick run-through of Kubescape's GitHub Action. Although this was a simple scan, its power is scalable to extremely large production workloads and can help you keep your security footprint at a minimum. With a direct integration to the rest of the Kubescape ecosystem, such as the CLI, Cloud, and more, it is the ideal tool for maintaining your Kubernetes security.
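One last hedged tip: the same exceptions file can be fed to the local CLI, which makes it easy to confirm an exception matches before you commit it (this assumes the --exceptions flag on your CLI version mirrors the Action input):

```
kubescape scan framework nsa,mitre nginx.yml --exceptions exceptions.json --severity-threshold low
```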
{ "category": "Provisioning", "file_name": ".md", "project_name": "Kubescape", "subcategory": "Security & Compliance" }
[ { "data": "``` package example import rego.v1``` OPA is purpose built for reasoning about information represented in structured documents. The data that your service and its users publish can be inspected and transformed using OPAs native query language Rego. Rego was inspired by Datalog, which is a well understood, decades old query language. Rego extends Datalog to support structured document models such as JSON. Rego queries are assertions on data stored in OPA. These queries can be used to define policies that enumerate instances of data that violate the expected state of the system. Use Rego for defining policy that is easy to read and write. Rego focuses on providing powerful support for referencing nested documents and ensuring that queries are correct and unambiguous. Rego is declarative so policy authors can focus on what queries should return rather than how queries should be executed. These queries are simpler and more concise than the equivalent in an imperative language. Like other applications which support declarative query languages, OPA is able to optimize queries to improve performance. In while reviewing the examples below, you might find it helpful to follow along using the online OPA playground. The playground also allows sharing of examples via URL which can be helpful when asking questions on the OPA Slack. In addition to these official resources, you may also be interested to check out the community learning materials and tools. These 8 projects are listed on the OPA Ecosystem page as related to learning Rego. This section introduces the main aspects of Rego. The simplest rule is a single expression and is defined in terms of a Scalar Value: ``` pi := 3.14159``` Rules define the content of documents. We can query for the content of the pi document generated by the rule above: ``` pi``` ``` 3.14159``` Rules can also be defined in terms of Composite Values: ``` rect := {\"width\": 2, \"height\": 4}``` The result: ``` rect``` ``` { \"height\": 4, \"width\": 2 }``` You can compare two scalar or composite values, and when you do so you are checking if the two values are the same JSON value. ``` rect == {\"height\": 4, \"width\": 2}``` ``` true``` You can define a new concept using a rule. For example, v below is true if the equality expression is true. ``` v if \"hello\" == \"world\"``` If we evaluate v, the result is undefined because the body of the rule never evaluates to true. As a result, the document generated by the rule is not defined. ``` v``` ``` undefined decision``` Expressions that refer to undefined values are also undefined. This includes comparisons such as !=. ``` v == true``` ``` undefined decision``` ``` v != true``` ``` undefined decision``` We can define rules in terms of Variables as well: ``` t if { x := 42; y := 41; x > y }``` The formal syntax uses the semicolon character ; to separate expressions. Rule bodies can separate expressions with newlines and omit the semicolon: ``` t2 if { x := 42 y := 41 x > y }``` Note that the future keyword if is optional. We could have written v and t2 like this: ``` v { \"hello\" == \"world\" } t2 { x := 42 y := 41 x > y }``` When evaluating rule bodies, OPA searches for variable bindings that make all of the expressions true. There may be multiple sets of bindings that make the rule body true. 
The rule body can be understood intuitively as:

```
expression-1 AND expression-2 AND ... AND expression-N
```

The rule itself can be understood intuitively as:

```
rule-name IS value IF body
```

If the value is omitted, it defaults to true.

When we query for the value of t2 we see the obvious result:

```
t2
```
```
true
```

Rego References help you refer to nested documents. For example, with:

```
sites := [{"name": "prod"}, {"name": "smoke1"}, {"name": "dev"}]
```

And

```
r if {
    some site in sites
    site.name == "prod"
}
```

The rule r above asserts that there exists (at least) one document within sites where the name attribute equals "prod". The result:

```
r
```
```
true
```

We can generalize the example above with a rule that defines a set document instead of a boolean document:

```
q contains name if {
    some site in sites
    name := site.name
}
```

The value of q is a set of names:

```
q
```
```
[
  "dev",
  "prod",
  "smoke1"
]
```

We can re-write the rule r from above to make use of q. We will call the new rule p:

```
p if q["prod"]
```

Querying p will have the same result:

```
p
```
```
true
```

As you can see, rules which have arguments can be queried with input values:

```
q["smoke2"]
```
```
undefined decision
```

If you made it this far, congratulations! This section introduced the main aspects of Rego. The rest of this document walks through each part of the language in more detail. For a concise reference, see the Policy Reference document.

Scalar values are the simplest type of term in Rego. Scalar values can be Strings, numbers, booleans, or null. Documents can be defined solely in terms of scalar values. This is useful for defining constants that are referenced in multiple places. For example:

```
greeting   := "Hello"
max_height := 42
pi         := 3.14159
allowed    := true
location   := null
```

These documents can be queried like any other:

```
[greeting, max_height, pi, allowed, location]
```
```
[
  "Hello",
  42,
  3.14159,
  true,
  null
]
```

Rego supports two different types of syntax for declaring strings. The first is likely to be the most familiar: characters surrounded by double quotes. In such strings, certain characters must be escaped to appear in the string, such as double quotes themselves, backslashes, etc. See the Policy Reference for a formal definition.

The other type of string declaration is a raw string declaration. These are made of characters surrounded by backticks (`), with the exception that raw strings may not contain backticks themselves. Raw strings are what they sound like: escape sequences are not interpreted, but instead taken as the literal text inside the backticks. For example, the raw string `hello\there` will be the text hello\there, not hello and there separated by a tab. Raw strings are particularly useful when constructing regular expressions for matching, as it eliminates the need to double escape special characters. A simple example is a regex to match a valid Rego variable. With a regular string, the regex is "[a-zA-Z_]\\w*", but with raw strings, it becomes `[a-zA-Z_]\w*`.

Composite values define collections.
In simple cases, composite values can be treated as constants like Scalar Values:

```
cuboid := {"width": 3, "height": 4, "depth": 5}
```

The result:

```
cuboid.width
```
```
3
```

Composite values can also be defined in terms of Variables or References. For example:

```
a := 42
b := false
c := null
d := {"a": a, "x": [b, c]}
```
```
+----+-------+------+---------------------------+
| a  | b     | c    | d                         |
+----+-------+------+---------------------------+
| 42 | false | null | {"a":42,"x":[false,null]} |
+----+-------+------+---------------------------+
```

By defining composite values in terms of variables and references, rules can define abstractions over raw data and other rules.

Arrays are ordered collections of values. Arrays in Rego are zero-indexed, and may contain any value, including variable references.

```
arr := [1, "two", 3]
```
```
last := arr[2]
```
```
+------+
| last |
+------+
| 3    |
+------+
```

Use arrays when order matters and duplicate values should be allowed.

Objects are unordered key-value collections. In Rego, any value type can be used as an object key. For example, the following assignment maps port numbers to a list of IP addresses (represented as strings).

```
ips_by_port := {
    80: ["1.1.1.1", "1.1.1.2"],
    443: ["2.2.2.1"],
}
```
```
ips_by_port[80]
```
```
[
  "1.1.1.1",
  "1.1.1.2"
]
```
```
some port; ips_by_port[port][_] == "2.2.2.1"
```
```
+------+
| port |
+------+
| 443  |
+------+
```

When Rego values are converted to JSON, non-string object keys are marshalled as strings (because JSON does not support non-string object keys).

```
ips_by_port
```
```
{
  "443": [
    "2.2.2.1"
  ],
  "80": [
    "1.1.1.1",
    "1.1.1.2"
  ]
}
```

In addition to arrays and objects, Rego supports set values. Sets are unordered collections of unique values. Just like other composite values, sets can be defined in terms of scalars, variables, references, and other composite values. For example:

```
s := {cuboid.width, cuboid.height, cuboid.depth}
```
```
+---------+
| s       |
+---------+
| [3,4,5] |
+---------+
```

Set documents are collections of values without keys. OPA represents set documents as arrays when serializing to JSON or other formats that do not support a set data type. The important distinction between sets and arrays or objects is that sets are unkeyed while arrays and objects are keyed, i.e., you cannot refer to the index of an element within a set.

When comparing sets, the order of elements does not matter:

```
{1,2,3} == {3,1,2}
```
```
true
```

Because sets are unordered, variables inside sets must be unified with a ground value outside of the set. If the variable is not unified with a ground value outside the set, OPA will complain:

```
{1,2,3} == {3,x,2}
```
```
1 error occurred: 1:1: rego_unsafe_var_error: var x is unsafe
```

Because sets share curly-brace syntax with objects, and an empty object is defined with {}, an empty set has to be constructed with a different syntax:

```
count(set())
```
```
0
```

Variables are another kind of term in Rego. They appear in both the head and body of rules.

Variables appearing in the head of a rule can be thought of as input and output of the rule. Unlike many programming languages, where a variable is either an input or an output, in Rego a variable is simultaneously an input and an output. If a query supplies a value for a variable, that variable is an input, and if the query does not supply a value for a variable, that variable is an output. For example:

```
sites := [
    {"name": "prod"},
    {"name": "smoke1"},
    {"name": "dev"}
]

q contains name if {
    some site in sites
    name := site.name
}
```

In this case, we evaluate q with a variable x (which is not bound to a value).
As a result, the query returns all of the values for x and all of the values for q[x], which are always the same because q is a set.

```
q[x]
```
```
+----------+----------+
| x        | q[x]     |
+----------+----------+
| "dev"    | "dev"    |
| "prod"   | "prod"   |
| "smoke1" | "smoke1" |
+----------+----------+
```

On the other hand, if we evaluate q with an input value for name we can determine whether name exists in the document defined by q:

```
q["dev"]
```
```
"dev"
```

Variables appearing in the head of a rule must also appear in a non-negated equality expression within the same rule. This property ensures that if the rule is evaluated and all of the expressions evaluate to true for some set of variable bindings, the variable in the head of the rule will be defined.

References are used to access nested documents. The examples in this section use the data defined in the Examples section.

The simplest reference contains no variables. For example, the following reference returns the hostname of the second server in the first site document from our example data:

```
sites[0].servers[1].hostname
```
```
"helium"
```

References are typically written using the dot-access style. The canonical form does away with . and closely resembles dictionary lookup in a language such as Python:

```
sites[0]["servers"][1]["hostname"]
```
```
"helium"
```

Both forms are valid, however, the dot-access style is typically more readable. Note that there are four cases where brackets must be used: string keys containing characters other than [a-zA-Z0-9_], non-string keys such as numbers, booleans, and null, variable keys (described below), and composite keys (described below).

The prefix of a reference identifies the root document for that reference. In the example above this is sites. The root document may be a local variable inside the same rule, a rule inside the same package, or a document stored in OPA (data) or supplied with the query (input).

References can include variables as keys. References written this way are used to select a value from every element in a collection. The following reference will select the hostnames of all the servers in our example data:

```
sites[i].servers[j].hostname
```
```
+---+---+------------------------------+
| i | j | sites[i].servers[j].hostname |
+---+---+------------------------------+
| 0 | 0 | "hydrogen"                   |
| 0 | 1 | "helium"                     |
| 0 | 2 | "lithium"                    |
| 1 | 0 | "beryllium"                  |
| 1 | 1 | "boron"                      |
| 1 | 2 | "carbon"                     |
| 2 | 0 | "nitrogen"                   |
| 2 | 1 | "oxygen"                     |
+---+---+------------------------------+
```

Conceptually, this is the same as the following imperative (Python) code:

```
def hostnames(sites):
    result = []
    for site in sites:
        for server in site.servers:
            result.append(server.hostname)
    return result
```

In the reference above, we effectively used variables named i and j to iterate the collections. If the variables are unused outside the reference, we prefer to replace them with an underscore (_) character. The reference above can be rewritten as:

```
sites[_].servers[_].hostname
```
```
+------------------------------+
| sites[_].servers[_].hostname |
+------------------------------+
| "hydrogen"                   |
| "helium"                     |
| "lithium"                    |
| "beryllium"                  |
| "boron"                      |
| "carbon"                     |
| "nitrogen"                   |
| "oxygen"                     |
+------------------------------+
```

The underscore is special because it cannot be referred to by other parts of the rule, e.g., the other side of the expression, another expression, etc. The underscore can be thought of as a special iterator. Each time an underscore is specified, a new iterator is instantiated.

Under the hood, OPA translates the _ character to a unique variable name that does not conflict with variables and rules that are in scope.

References can include Composite Values as keys if the key is being used to refer into a set. Composite keys may not be used in refs for base data documents, they are only valid for references into virtual documents. This is useful for checking for the presence of composite values within a set, or extracting all values within a set matching some pattern.
For example:

```
s := {[1, 2], [1, 4], [2, 6]}
```
```
s[[1, 2]]
```
```
[
  1,
  2
]
```
```
s[[1, x]]
```
```
+---+-----------+
| x | s[[1, x]] |
+---+-----------+
| 2 | [1,2]     |
| 4 | [1,4]     |
+---+-----------+
```

Rules are often written in terms of multiple expressions that contain references to documents. In the following example, the rule defines a set of arrays where each array contains an application name and a hostname of a server where the application is deployed.

```
apps_and_hostnames contains [name, hostname] if {
    some i, j, k
    name := apps[i].name
    server := apps[i].servers[_]
    sites[j].servers[k].name == server
    hostname := sites[j].servers[k].hostname
}
```

The result:

```
apps_and_hostnames[x]
```
```
+----------------------+-----------------------+
| x                    | apps_and_hostnames[x] |
+----------------------+-----------------------+
| ["mongodb","oxygen"] | ["mongodb","oxygen"]  |
| ["mysql","carbon"]   | ["mysql","carbon"]    |
| ["mysql","lithium"]  | ["mysql","lithium"]   |
| ["web","beryllium"]  | ["web","beryllium"]   |
| ["web","boron"]      | ["web","boron"]       |
| ["web","helium"]     | ["web","helium"]      |
| ["web","hydrogen"]   | ["web","hydrogen"]    |
| ["web","nitrogen"]   | ["web","nitrogen"]    |
+----------------------+-----------------------+
```

Don't worry about understanding everything in this example right now. There are just two important points: first, several variables appear in more than one place in the body, and a variable must take the same value everywhere it appears. Second, reusing variables this way implicitly joins the apps and sites documents, much like a join in SQL.

Using a different key on the same array or object provides the equivalent of self-join in SQL. For example, the following rule defines a document containing apps deployed on the same site as "mysql":

```
same_site contains apps[k].name if {
    some i, j, k
    apps[i].name == "mysql"
    server := apps[i].servers[_]
    server == sites[j].servers[_].name
    other_server := sites[j].servers[_].name
    server != other_server
    other_server == apps[k].servers[_]
}
```

The result:

```
same_site[x]
```
```
+-------+--------------+
| x     | same_site[x] |
+-------+--------------+
| "web" | "web"        |
+-------+--------------+
```

Comprehensions provide a concise way of building Composite Values from sub-queries. Like Rules, comprehensions consist of a head and a body. The body of a comprehension can be understood in exactly the same way as the body of a rule, that is, one or more expressions that must all be true in order for the overall body to be true. When the body evaluates to true, the head of the comprehension is evaluated to produce an element in the result.

The body of a comprehension is able to refer to variables defined in the outer body. For example:

```
region := "west"
names := [name | sites[i].region == region; name := sites[i].name]
```
```
+-----------------+--------+
| names           | region |
+-----------------+--------+
| ["smoke","dev"] | "west" |
+-----------------+--------+
```

In the above query, the second expression contains an Array Comprehension that refers to the region variable. The region variable will be bound in the outer body. When a comprehension refers to a variable in an outer body, OPA will reorder expressions in the outer body so that variables referred to in the comprehension are bound by the time the comprehension is evaluated.

Comprehensions are similar to the same constructs found in other languages like Python. For example, we could write the above comprehension in Python as follows:

```
names = [site.name for site in sites if site.region == "west"]
```

Comprehensions are often used to group elements by some key. A common use case for comprehensions is to assist in computing aggregate values (e.g., the number of containers running on a host).

Array Comprehensions build array values out of sub-queries.
Array Comprehensions have the form:

```
[ <term> | <body> ]
```

For example, the following rule defines an object where the keys are application names and the values are hostnames of servers where the application is deployed. The hostnames of servers are represented as an array.

```
app_to_hostnames[app_name] := hostnames if {
    app := apps[_]
    app_name := app.name
    hostnames := [hostname |
        name := app.servers[_]
        s := sites[_].servers[_]
        s.name == name
        hostname := s.hostname
    ]
}
```

The result:

```
app_to_hostnames[app]
```
```
+-----------+------------------------------------------------------+
| app       | app_to_hostnames[app]                                |
+-----------+------------------------------------------------------+
| "mongodb" | ["oxygen"]                                           |
| "mysql"   | ["lithium","carbon"]                                 |
| "web"     | ["hydrogen","helium","beryllium","boron","nitrogen"] |
+-----------+------------------------------------------------------+
```

Object Comprehensions build object values out of sub-queries. Object Comprehensions have the form:

```
{ <key>: <term> | <body> }
```

We can use Object Comprehensions to write the rule from above as a comprehension instead:

```
app_to_hostnames := {app.name: hostnames |
    app := apps[_]
    hostnames := [hostname |
        name := app.servers[_]
        s := sites[_].servers[_]
        s.name == name
        hostname := s.hostname
    ]
}
```

The result is the same:

```
app_to_hostnames[app]
```
```
+-----------+------------------------------------------------------+
| app       | app_to_hostnames[app]                                |
+-----------+------------------------------------------------------+
| "mongodb" | ["oxygen"]                                           |
| "mysql"   | ["lithium","carbon"]                                 |
| "web"     | ["hydrogen","helium","beryllium","boron","nitrogen"] |
+-----------+------------------------------------------------------+
```

Object comprehensions are not allowed to have conflicting entries, similar to rules:

```
{"foo": y | z := [1, 2, 3]; y := z[_]}
```
```
1 error occurred: "foo": eval_conflict_error: object keys must be unique
```

Set Comprehensions build set values out of sub-queries. Set Comprehensions have the form:

```
{ <term> | <body> }
```

For example, to construct a set from an array:

```
a := [1, 2, 3, 4, 3, 4, 3, 4, 5]
b := {x | x = a[_]}
```
```
a
b
```
```
+---------------------+-------------+
| a                   | b           |
+---------------------+-------------+
| [1,2,3,4,3,4,3,4,5] | [1,2,3,4,5] |
+---------------------+-------------+
```

Rules define the content of Virtual Documents in OPA. When OPA evaluates a rule, we say OPA generates the content of the document that is defined by the rule. The sample code in this section makes use of the data defined in Examples.

Rule definitions can be more expressive when using the future keywords contains and if, which will become standard in OPA v1.0. To follow along as-is, please import the keywords, or preferably, import rego.v1:

```
import rego.v1
```

See the docs on future keywords for more information.

The following rule defines a set containing the hostnames of all servers:

```
hostnames contains name if {
    name := sites[_].servers[_].hostname
}
```

When we query for the content of hostnames we see the same data as we would if we queried using the sites[_].servers[_].hostname reference directly:

```
hostnames[name]
```
```
+-------------+-----------------+
| name        | hostnames[name] |
+-------------+-----------------+
| "beryllium" | "beryllium"     |
| "boron"     | "boron"         |
| "carbon"    | "carbon"        |
| "helium"    | "helium"        |
| "hydrogen"  | "hydrogen"      |
| "lithium"   | "lithium"       |
| "nitrogen"  | "nitrogen"      |
| "oxygen"    | "oxygen"        |
+-------------+-----------------+
```

This example introduces a few important aspects of Rego. First, the rule defines a set document where the contents are defined by the variable name. We know this rule defines a set document because the head only includes a key. All rules have the following form (where key, value, and body are all optional):

```
<name> <key>? <value>? <body>?
```

For a more formal definition of the rule syntax, see the Policy Reference document.
Second, the sites[_].servers[_].hostname fragment selects the hostname attribute from all the objects in the servers collection. From reading the fragment in isolation we cannot tell whether the fragment refers to arrays or objects. We only know that it refers to a collection of values.

Third, the name := sites[_].servers[_].hostname expression binds the value of the hostname attribute to the variable name, which is also declared in the head of the rule.

Rules that define objects are very similar to rules that define sets.

```
apps_by_hostname[hostname] := app if {
    some i
    server := sites[_].servers[_]
    hostname := server.hostname
    apps[i].servers[_] == server.name
    app := apps[i].name
}
```

The rule above defines an object that maps hostnames to app names. The main difference between this rule and one which defines a set is the rule head: in addition to declaring a key, the rule head also declares a value for the document.

The result:

```
apps_by_hostname["helium"]
```
```
"web"
```

Using the (future) keyword if is optional here. The same rule can be defined as follows:

```
apps_by_hostname[hostname] := app {
    some i
    server := sites[_].servers[_]
    hostname := server.hostname
    apps[i].servers[_] == server.name
    app := apps[i].name
}
```

A rule may be defined multiple times with the same name. When a rule is defined this way, we refer to the rule definition as incremental because each definition is additive. The document produced by incrementally defined rules is the union of the documents produced by each individual rule. For example, we can write a rule that abstracts over our servers and containers data as instances:

```
instances contains instance if {
    server := sites[_].servers[_]
    instance := {"address": server.hostname, "name": server.name}
}

instances contains instance if {
    container := containers[_]
    instance := {"address": container.ipaddress, "name": container.name}
}
```

If the head of the rule is the same, we can chain multiple rule bodies together to obtain the same result. We don't recommend using this form anymore.

```
instances contains instance if {
    server := sites[_].servers[_]
    instance := {"address": server.hostname, "name": server.name}
} {
    container := containers[_]
    instance := {"address": container.ipaddress, "name": container.name}
}
```

An incrementally defined rule can be intuitively understood as <rule-1> OR <rule-2> OR ... OR <rule-N>.
The result:

```
instances[x]
```
```
+-----------------------------------------------+-----------------------------------------------+
| x                                             | instances[x]                                  |
+-----------------------------------------------+-----------------------------------------------+
| {"address":"10.0.0.1","name":"big_stallman"}  | {"address":"10.0.0.1","name":"big_stallman"}  |
| {"address":"10.0.0.2","name":"cranky_euclid"} | {"address":"10.0.0.2","name":"cranky_euclid"} |
| {"address":"beryllium","name":"web-1000"}     | {"address":"beryllium","name":"web-1000"}     |
| {"address":"boron","name":"web-1001"}         | {"address":"boron","name":"web-1001"}         |
| {"address":"carbon","name":"db-1000"}         | {"address":"carbon","name":"db-1000"}         |
| {"address":"helium","name":"web-1"}           | {"address":"helium","name":"web-1"}           |
| {"address":"hydrogen","name":"web-0"}         | {"address":"hydrogen","name":"web-0"}         |
| {"address":"lithium","name":"db-0"}           | {"address":"lithium","name":"db-0"}           |
| {"address":"nitrogen","name":"web-dev"}       | {"address":"nitrogen","name":"web-dev"}       |
| {"address":"oxygen","name":"db-dev"}          | {"address":"oxygen","name":"db-dev"}          |
+-----------------------------------------------+-----------------------------------------------+
```

In addition to rules that partially define sets and objects, Rego also supports so-called complete definitions of any type of document. Rules provide a complete definition by omitting the key in the head. Complete definitions are commonly used for constants:

```
pi := 3.14159
```

Documents produced by rules with complete definitions can only have one value at a time. If evaluation produces multiple values for the same document, an error will be returned. For example:

```
user := "bob"

power_users := {"alice", "bob", "fred"}

restricted_users := {"bob", "kim"}

max_memory := 32 if power_users[user]

max_memory := 4 if restricted_users[user]
```

Error:

```
1 error occurred: module.rego:16: eval_conflict_error: complete rules must not produce multiple outputs
```

OPA returns an error in this case because the rule definitions are in conflict. The value produced by max_memory cannot be 32 and 4 at the same time.

The documents produced by rules with complete definitions may still be undefined:

```
max_memory with user as "johnson"
```
```
undefined decision
```

In some cases, having an undefined result for a document is not desirable. In those cases, policies can use the Default Keyword to provide a fallback value.

As a shorthand for defining nested rule structures, it's valid to use references as rule heads:

```
fruit.apple.seeds := 12

fruit.orange.color := "orange"
```

This module defines two complete rules, data.example.fruit.apple.seeds and data.example.fruit.orange.color:

```
data.example
```
```
{
  "fruit": {
    "apple": {
      "seeds": 12
    },
    "orange": {
      "color": "orange"
    }
  }
}
```

Any term, except the very first, in a rule head's reference can be a variable. These variables can be assigned within the rule, just as for any other partial rule, to dynamically construct a nested collection of objects.
Input:

```
{
    "users": [
        {"id": "alice", "role": "employee", "country": "USA"},
        {"id": "bob", "role": "customer", "country": "USA"},
        {"id": "dora", "role": "admin", "country": "Sweden"}
    ],
    "admins": [
        {"id": "charlie"}
    ]
}
```

Module:

```
package example

import rego.v1

users_by_role[role][id] := user if {
    some user in input.users
    id := user.id
    role := user.role
}

users_by_role.admin[id] := user if {
    some user in input.admins
    id := user.id
}

users_by_country[country] contains user.id if {
    some user in input.users
    country := user.country
}
```

Output:

```
{
  "users_by_country": {
    "Sweden": ["dora"],
    "USA": ["alice", "bob"]
  },
  "users_by_role": {
    "admin": {
      "charlie": {"id": "charlie"},
      "dora": {"country": "Sweden", "id": "dora", "role": "admin"}
    },
    "customer": {
      "bob": {"country": "USA", "id": "bob", "role": "customer"}
    },
    "employee": {
      "alice": {"country": "USA", "id": "alice", "role": "employee"}
    }
  }
}
```

The first variable declared in a rule head's reference divides the reference into a leading constant portion and a trailing dynamic portion. Other rules are allowed to overlap with the dynamic portion (dynamic extent) without causing a compile-time conflict.

```
package example

import rego.v1

p[x].r := y if {
    x := "q"
    y := 1
}

p.q.r := 2
```

Error:

```
1 error occurred: module.rego:12: eval_conflict_error: object keys must be unique
```

In the above example, rule R2 overlaps with the dynamic portion of rule R1's reference ([x].r), which is allowed at compile-time, as these rules aren't guaranteed to produce conflicting output. However, as R1 defines x as "q" and y as 1, a conflict will be reported at evaluation-time.

Conflicts are detected at compile-time, where possible, between rules even if they are within the dynamic extent of another rule.

```
package example

import rego.v1

p[x].r := y if {
    x := "foo"
    y := 1
}

p.q.r := 2

p.q.r.s := 3
```

Error:

```
1 error occurred: module.rego:12: rego_type_error: rule data.example.p.q.r conflicts with [data.example.p.q.r.s]
```

Above, R2 and R3 are within the dynamic extent of R1, but are in conflict with each other, which is detected at compile-time.

Rules are not allowed to overlap with object values of other rules.

```
package example

import rego.v1

p.q.r := {"s": 1}

p[x].r.t := 2 if {
    x := "q"
}
```

Error:

```
1 error occurred: module.rego:6: eval_conflict_error: object keys must be unique
```

In the above example, R1 is within the dynamic extent of R2 and a conflict cannot be detected at compile-time. However, at evaluation-time R2 will attempt to inject a value under key t in an object value defined by R1. This is a conflict, as rules are not allowed to modify or replace values defined by other rules. We won't get a conflict if we update the policy to the following:

```
package example

import rego.v1

p.q.r.s := 1

p[x].r.t := 2 if {
    x := "q"
}
```

As R1 is now instead defining a value within the dynamic extent of R2's reference, which is allowed:

```
{
  "p": {
    "q": {
      "r": {
        "s": 1,
        "t": 2
      }
    }
  }
}
```

Rego supports user-defined functions that can be called with the same semantics as Built-in Functions. They have access to both the data Document and the input Document.

For example, the following function will return the result of trimming the spaces from a string and then splitting it by periods.
```
trim_and_split(s) := x if {
    t := trim(s, " ")
    x := split(t, ".")
}
```
```
trim_and_split(" foo.bar ")
```
```
[
  "foo",
  "bar"
]
```

Functions may have an arbitrary number of inputs, but exactly one output. Function arguments may be any kind of term. For example, suppose we have the following function:

```
foo([x, {"bar": y}]) := z if {
    z := {x: y}
}
```

The following calls would produce the logical mappings given:

| Call | x | y |
|:-|:--|:--|
| z := foo(a) | a[0] | a[1].bar |
| z := foo(["5", {"bar": "hello"}]) | "5" | "hello" |
| z := foo(["5", {"bar": [1, 2, 3, ["foo", "bar"]]}]) | "5" | [1, 2, 3, ["foo", "bar"]] |

If you need multiple outputs, write your functions so that the output is an array, object or set containing your results. If the output term is omitted, it is equivalent to having the output term be the literal true. Furthermore, if can be used to write shorter definitions. That is, the function declarations below are equivalent:

```
f(x) if { x == "foo" }

f(x) if x == "foo"

f(x) := true if { x == "foo" }

f(x) := true if x == "foo"
```

The outputs of user functions have some additional limitations, namely that they must resolve to a single value. If you write a function that has multiple possible bindings for an output variable, you will get a conflict error:

```
p(x) := y if {
    y := x[_]
}
```
```
p([1, 2, 3])
```
```
1 error occurred: module.rego:4: eval_conflict_error: functions must not produce multiple outputs for same inputs
```

It is possible in Rego to define a function more than once, to achieve a conditional selection of which function to execute: functions can be defined incrementally.

```
q(1, x) := y if {
    y := x
}

q(2, x) := y if {
    y := x * 4
}
```
```
q(1, 2)
```
```
2
```
```
q(2, 2)
```
```
8
```

A given function call will execute all functions that match the signature given. If a call matches multiple functions, they must produce the same output, or else a conflict error will occur:

```
r(1, x) := y if {
    y := x
}

r(x, 2) := y if {
    y := x * 4
}
```
```
r(1, 2)
```
```
1 error occurred: module.rego:4: eval_conflict_error: functions must not produce multiple outputs for same inputs
```

On the other hand, if a call matches no functions, then the result is undefined.

```
s(x, 2) := y if {
    y := x * 4
}
```
```
s(5, 2)
```
```
20
```
```
s(5, 3)
```
```
undefined decision
```

Rego does not currently support the overloading of functions by the number of parameters. If two function definitions are given with the same function name but different numbers of parameters, a compile-time type error is generated.

```
r(x) := result if {
    result := 2 * x
}

r(x, y) := result if {
    result := 2*x + 3*y
}
```
```
1 error occurred: module.rego:4: rego_type_error: conflicting rules data.example.r found
```

The error can be avoided by using different function names.

```
r_1(x) := result if {
    result := 2 * x
}

r_2(x, y) := result if {
    result := 2*x + 3*y
}
```
```
[ r_1(10), r_2(10, 1) ]
```
```
[
  20,
  23
]
```

In the unusual case that it is critical to use the same name, the function could be made to take the list of parameters as a single array. However, this approach is not generally recommended because it sacrifices some helpful compile-time checking and can be quite error-prone.
```
r(params) := result if {
    count(params) == 1
    result := 2 * params[0]
}

r(params) := result if {
    count(params) == 2
    result := 2*params[0] + 3*params[1]
}
```
```
[ r([10]), r([10, 1]) ]
```
```
[
  20,
  23
]
```

To generate the content of a Virtual Document, OPA attempts to bind variables in the body of the rule such that all expressions in the rule evaluate to true. This generates the correct result when the expressions represent assertions about what states should exist in the data stored in OPA. In some cases, you want to express that certain states should not exist in the data stored in OPA. In these cases, negation must be used.

For safety, a variable appearing in a negated expression must also appear in another non-negated equality expression in the rule. OPA will reorder expressions to ensure that negated expressions are evaluated after other non-negated expressions with the same variables. OPA will reject rules containing negated expressions that do not meet the safety criteria described above.

The simplest use of negation involves only scalar values or variables and is equivalent to complementing the operator:

```
t if {
    greeting := "hello"
    not greeting == "goodbye"
}
```

The result:

```
t
```
```
true
```

Negation is required to check whether some value does not exist in a collection: not p["foo"]. That is not the same as complementing the == operator in an expression p[_] == "foo", which yields p[_] != "foo", meaning: for any item in p, return true if the item is not "foo". See more details here.

For example, we can write a rule that defines a document containing names of apps not deployed on the "prod" site:

```
prod_servers contains name if {
    some site in sites
    site.name == "prod"
    some server in site.servers
    name := server.name
}

apps_in_prod contains name if {
    some site in sites
    some app in apps
    name := app.name
    some server in app.servers
    prod_servers[server]
}

apps_not_in_prod contains name if {
    some app in apps
    name := app.name
    not apps_in_prod[name]
}
```

The result:

```
apps_not_in_prod[name]
```
```
+-----------+------------------------+
| name      | apps_not_in_prod[name] |
+-----------+------------------------+
| "mongodb" | "mongodb"              |
+-----------+------------------------+
```

Rego allows for several ways to express universal quantification. For example, imagine you want to express a policy that says (in English):

```
There must be no apps named "bitcoin-miner".
```

The most expressive way to state this in Rego is using the every keyword:

```
no_bitcoin_miners_using_every if {
    every app in apps {
        app.name != "bitcoin-miner"
    }
}
```

Variables in Rego are existentially quantified by default: when you write

```
array := ["one", "two", "three"]; array[i] == "three"
```

the query will be satisfied if there is an i such that the query's expressions are simultaneously satisfied.

```
+-----------------------+---+
| array                 | i |
+-----------------------+---+
| ["one","two","three"] | 2 |
+-----------------------+---+
```

Therefore, there are other ways to express the desired policy. For this policy, you can also define a rule that finds if there exists a bitcoin-mining app (which is easy using the some keyword). And then you use negation to check that there is NO bitcoin-mining app. Technically, you're using 2 negations and an existential quantifier, which is logically the same as a universal quantifier.
For example:

```
no_bitcoin_miners_using_negation if not any_bitcoin_miners

any_bitcoin_miners if {
    some app in apps
    app.name == "bitcoin-miner"
}
```
```
no_bitcoin_miners_using_negation with apps as [{"name": "web"}]
```
```
true
```
```
no_bitcoin_miners_using_negation with apps as [{"name": "bitcoin-miner"}, {"name": "web"}]
```
```
undefined decision
```

A common mistake is to try encoding the policy with a rule named no_bitcoin_miners like so:

```
no_bitcoin_miners if {
    app := apps[_]
    app.name != "bitcoin-miner" # THIS IS NOT CORRECT.
}
```

It becomes clear that this is incorrect when you use the some keyword, because the rule is true whenever there is SOME app that is not a bitcoin-miner:

```
no_bitcoin_miners if {
    some app in apps
    app.name != "bitcoin-miner"
}
```

You can confirm this by querying the rule:

```
no_bitcoin_miners with apps as [{"name": "bitcoin-miner"}, {"name": "web"}]
```
```
true
```

The reason the rule is incorrect is that variables in Rego are existentially quantified. This means that rule bodies and queries express FOR ANY and not FOR ALL. To express FOR ALL in Rego, complement the logic in the rule body (e.g., != becomes ==) and then complement the check using negation (e.g., no_bitcoin_miners becomes not any_bitcoin_miners).

Alternatively, we can implement the same kind of logic inside a single rule using Comprehensions.

```
no_bitcoin_miners_using_comprehension if {
    bitcoin_miners := {app | some app in apps; app.name == "bitcoin-miner"}
    count(bitcoin_miners) == 0
}
```

Whether you use negation, comprehensions, or every to express FOR ALL is up to you. The every keyword should lend itself nicely to a rule formulation that closely follows how requirements are stated, and thus enhances your policy's readability. The comprehension version is more concise than the negation variant and does not require a helper rule, while the negation version is more verbose but a bit simpler and allows for more complex ORs.

In Rego, policies are defined inside modules. Modules consist of exactly one package declaration, zero or more import statements, and zero or more rule definitions.

Modules are typically represented in Unicode text and encoded in UTF-8. Comments begin with the # character and continue until the end of the line.

Packages group the rules defined in one or more modules into a particular namespace. Because rules are namespaced, they can be safely shared across projects. Modules contributing to the same package do not have to be located in the same directory.

The rules defined in a module are automatically exported. That is, they can be queried under OPA's Data API provided the appropriate package is given. For example, given the following module:

```
package opa.examples

pi := 3.14159
```

The pi document can be queried via the Data API:

```
GET https://example.com/v1/data/opa/examples/pi HTTP/1.1
```

Valid package names are variables or references that only contain string operands. For example, these are all valid package names:

```
package foo
package foo.bar
package foo.bar.baz
package foo["bar.baz"].qux
```

These are invalid package names:

```
package 1foo        # not a variable
package foo[1].bar  # contains non-string operand
```

For more details see the language Grammar.

Import statements declare dependencies that modules have on documents defined outside the package. By importing a document, the identifiers exported by that document can be referenced within the current module. All modules contain implicit statements which import the data and input documents.

Modules use the same syntax to declare dependencies on Base and Virtual Documents.
```
package opa.examples

import rego.v1 # uses 'in' and 'contains' and 'if'

import data.servers

http_servers contains server if {
    some server in servers
    "http" in server.protocols
}
```

Similarly, modules can declare dependencies on query arguments by specifying an import path that starts with input.

```
package opa.examples

import rego.v1

import input.user
import input.method

allow if user == "alice"

allow if {
    user == "bob"
    method == "GET"
}

allow if {
    method == "GET"
    input.user in data.roles["dev"]
}

allow if {
    user == "catherine"
    day := time.weekday(time.now_ns())
    day in ["Saturday", "Sunday"]
}
```

Imports can include an optional as keyword to handle namespacing issues:

```
package opa.examples

import rego.v1

import data.servers as my_servers

http_servers contains server if {
    some server in my_servers
    "http" in server.protocols
}
```

To ensure backwards-compatibility, new keywords (like every) were introduced slowly. In the first stage, users could opt-in to using the new keywords via a special import. In OPA v1.0, the new keywords will become standard, and the import will become a no-op that can safely be removed. This should give all users ample time to update their policies, so that the new keywords will not cause clashes with existing variable names.

This is the list of all future keywords that will become standard in OPA v1.0.

More expressive membership and existential quantification keyword:

```
deny {
    some x in input.roles # iteration
    x == "denylisted-role"
}

deny {
    "denylisted-role" in input.roles # membership check
}
```

in was introduced in v0.34.0. See the keywords docs for details.

Expressive universal quantification keyword:

```
allowed := {"customer", "admin"}

allow {
    every role in input.roles {
        role.name in allowed
    }
}
```

There is no need to also import future.keywords.in, that is implied by importing future.keywords.every. every was introduced in v0.38.0. See Every Keyword for details.

This keyword allows more expressive rule heads:

```
deny if input.token != "secret"
```

if was introduced in v0.42.0.

This keyword allows more expressive rule heads for partial set rules:

```
deny contains msg {
    msg := "forbidden"
}
```

contains was introduced in v0.42.0.

The some keyword allows queries to explicitly declare local variables. Use the some keyword in rules that contain unification statements or references with variable operands if variables contained in those statements are not declared using :=.

| Statement | Example | Variables |
|:-|:-|:-|
| Unification | input.a = [["b", x], [y, "c"]] | x and y |
| Reference with variable operands | data.foo[i].bar[j] | i and j |

For example, the following rule generates tuples of array indices for servers in the west region that contain db in their name. The first element in the tuple is the site index and the second element is the server index.

```
tuples contains [i, j] if {
    some i, j
    sites[i].region == "west"
    server := sites[i].servers[j] # note: 'server' is local because it's declared with :=
    contains(server.name, "db")
}
```

If we query for the tuples we get two results:

```
tuples
```
```
[
  [1, 2],
  [2, 1]
]
```

Since we have declared i, j, and server to be local, we can introduce rules in the same package without affecting the result above:

```
i := 1
```

If we had not declared i with the some keyword, introducing the i rule above would have changed the result of tuples because the i symbol in the body would capture the global value. Try removing some i, j and see what happens!
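A sketch of what goes wrong (the rule name tuples_broken is illustrative, and the sites data is the one from the Examples section): without the declaration, i in the body resolves to the global rule above instead of iterating.

```
i := 1

# Without `some i, j`, the i below is captured by the global rule above,
# so the body only ever inspects sites[1] and the [2, 1] tuple disappears.
tuples_broken contains [i, j] if {
    sites[i].region == "west"
    server := sites[i].servers[j]
    contains(server.name, "db")
}
```

Here tuples_broken evaluates to [[1, 2]] only, instead of the two tuples produced before.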
The some keyword is not required, but it's recommended to avoid situations like the one above where the introduction of a rule inside a package could change the behaviour of other rules.

For using the some keyword with iteration, see the documentation of the in operator.

every is a future keyword and needs to be imported. import rego.v1 or, alternatively, import future.keywords.every introduces the every keyword described here. See the docs on future keywords for more information.

```
names_with_dev if {
    some site in sites
    site.name == "dev"
    every server in site.servers {
        endswith(server.name, "-dev")
    }
}
```
```
names_with_dev
```
```
true
```

The every keyword takes an (optional) key argument, a value argument, a domain, and a block of further queries, its body. The keyword is used to explicitly assert that its body is true for any element in the domain. It will iterate over the domain, bind its variables, and check that the body holds for those bindings. If one of the bindings does not yield a successful evaluation of the body, the overall statement is undefined. If the domain is empty, the overall statement is true. Evaluating every does not introduce new bindings into the rule evaluation.

Used with a key argument, the index, or property name (for objects), comes into the scope of the body evaluation:

```
array_domain if {
    every i, x in [1, 2, 3] { x - i == 1 } # array domain
}

object_domain if {
    every k, v in {"foo": "bar", "fox": "baz"} { # object domain
        startswith(k, "f")
        startswith(v, "b")
    }
}

set_domain if {
    every x in {1, 2, 3} { x != 4 } # set domain
}
```
```
{
  "array_domain": true,
  "object_domain": true,
  "set_domain": true
}
```

Semantically, every x in xs { p(x) } is equivalent to, but shorter than, a not-some-not construct using a helper rule:

```
xs := [2, 2, 4, 8]

larger_than_one(x) := x > 1

rule_every if {
    every x in xs { larger_than_one(x) }
}

not_less_or_equal_one if not lte_one

lte_one if {
    some x in xs
    not larger_than_one(x)
}
```
```
{
  "not_less_or_equal_one": true,
  "rule_every": true,
  "xs": [2, 2, 4, 8]
}
```

Negating every is forbidden. If you desire to express not every x in xs { p(x) } please use some x in xs; not p(x) instead.

The with keyword allows queries to programmatically specify values nested under the input Document or the data Document, or built-in functions.

For example, given the simple authorization policy in the Imports section, we can write a query that checks whether a particular request would be allowed:

```
allow with input as {"user": "alice", "method": "POST"}
```
```
true
```
```
allow with input as {"user": "bob", "method": "GET"}
```
```
true
```
```
not allow with input as {"user": "bob", "method": "DELETE"}
```
```
true
```
```
allow with input as {"user": "charlie", "method": "GET"}
      with data.roles as {"dev": ["charlie"]}
```
```
true
```
```
not allow with input as {"user": "charlie", "method": "GET"}
          with data.roles as {"dev": ["bob"]}
```
```
true
```
```
allow with input as {"user": "catherine", "method": "GET"}
      with data.roles as {"dev": ["bob"]}
      with time.weekday as "Sunday"
```
```
true
```

The with keyword acts as a modifier on expressions. A single expression is allowed to have zero or more with modifiers.
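Because with composes this way, a common pattern is to pin down every external dependency of a rule in tests. A sketch (the rule, the test name, and the role data are illustrative):

```
allow if {
    input.user == "alice"
    "admin" in data.roles[input.user]
}

# Both input and data are replaced for the duration of this one expression.
test_allow_admin if {
    allow with input as {"user": "alice"}
          with data.roles as {"alice": ["admin"]}
}
```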
The with keyword has the following syntax:

```
<expr> with <target-1> as <value-1> [with <target-2> as <value-2> [...]]
```

The <target>s must be references to values in the input document (or the input document itself) or data document, or references to functions (built-in or not).

The with keyword only affects the attached expression. Subsequent expressions will see the unmodified value. The exception to this rule is when multiple with keywords are in-scope, like below:

```
inner := [x, y] if {
    x := input.foo
    y := input.bar
}

middle := [a, b] if {
    a := inner with input.foo as 100
    b := input
}

outer := result if {
    result := middle with input as {"foo": 200, "bar": 300}
}
```

When <target> is a reference to a function, like http.send, then its <value> can be any of the following: a value to use as the function's result, or another function (built-in or user-defined) to call in its place.

When the replacement value is a function, its arity needs to match the replaced function's arity; and the types must be compatible. Replacement functions can call the function they're replacing without causing recursion. See the following example:

```
f(x) := count(x)

mock_count(x) := 0 if "x" in x

mock_count(x) := count(x) if not "x" in x
```
```
f([1, 2, 3]) with count as mock_count
```
```
3
```
```
f(["x", "y", "z"]) with count as mock_count
```
```
0
```

Each replacement function evaluation will start a new scope: it's valid to use with <builtin1> as ... in the body of the replacement function, for example:

```
f(x) := count(x) if {
    rule_using_concat with concat as "foo,bar"
}

mock_count(x) := 0 if "x" in x

mock_count(x) := count(x) if not "x" in x

rule_using_concat if {
    concat(",", input.x) == "foo,bar"
}
```
```
f(["x", "y", "z"]) with count as mock_count with input.x as ["baz"]
```
```
0
```

Note that function replacement via with does not affect the evaluation of the function arguments: if input.x is undefined, the replacement of concat does not change the result of the evaluation:

```
count(input.x) with count as 3 with input.x as ["x"]
```
```
3
```
```
count(input.x) with count as 3 with input as {}
```
```
undefined decision
```

The default keyword allows policies to define a default value for documents produced by rules with Complete Definitions. The default value is used when all of the rules sharing the same name are undefined.

For example:

```
default allow := false

allow if {
    input.user == "bob"
    input.method == "GET"
}

allow if input.user == "alice"
```

When the allow document is queried, the return value will be either true or false.

```
allow
```

with input:

```
{
  "user": "bob",
  "method": "POST"
}
```
```
false
```

Without the default definition, the allow document would simply be undefined for the same input.

When the default keyword is used, the rule syntax is restricted to:

```
default <name> := <term>
```

The term may be any scalar, composite, or comprehension value but it may not be a variable or reference. If the value is a composite then it may not contain variables or references. Comprehensions however may, as the result of a comprehension is never undefined.

Similar to rules, the default keyword can be applied to functions as well. For example:

```
default clamp_positive(_) := 0

clamp_positive(x) := x if {
    x > 0
}
```

When clamp_positive is queried, the return value will be either the argument provided to the function or 0. The value of a default function follows the same conditions as that of a default rule.
In addition, a default function satisfies the following properties: The else keyword is a basic control flow construct that gives you control over rule evaluation order. Rules grouped together with the else keyword are evaluated until a match is found. Once a match is found, rule evaluation does not proceed to rules further in the chain. The else keyword is useful if you are porting policies into Rego from an order-sensitive system like IPTables. ``` authorize := \"allow\" if { input.user == \"superuser\" # allow 'superuser' to perform any operation. } else := \"deny\" if { input.path[0] == \"admin\" # disallow 'admin' operations... input.source_network == \"external\" # from external networks. } # ... more rules``` ``` authorize``` In the example below, evaluation stops immediately after the first rule even though the input matches the second rule as well. ``` { \"path\": [ \"admin\", \"exec_shell\" ], \"source_network\": \"external\", \"user\": \"superuser\" }``` ``` \"allow\"``` In the next example, the input matches the second rule (but not the first) so evaluation continues to the second rule before stopping. ``` { \"path\": [ \"admin\", \"exec_shell\" ], \"source_network\": \"external\", \"user\": \"alice\" }``` ``` \"deny\"``` The else keyword may be used repeatedly on the same rule and there is no limit imposed on the number of else clauses on a rule. To ensure backwards-compatibility, new keywords (like in) are introduced slowly. In the first stage, users can opt-in to using the new keywords via a special import: import rego.v1 or, alternatively, import future.keywords.in introduces the in keyword described here. See the docs on future keywords for more information. The membership operator in lets you check if an element is part of a collection (array, set, or object). 
It always evaluates to true or false: ``` p := [x, y, z] if { x := 3 in [1, 2, 3] # array y := 3 in {1, 2," }, { "data": "# set z := 3 in {\"foo\": 1, \"bar\": 3} # object }``` ``` { \"p\": [ true, true, true ] }``` When providing two arguments on the left-hand side of the in operator, and an object or an array on the right-hand side, the first argument is taken to be the key (object) or index (array), respectively: ``` p := [x, y] if { x := \"foo\", \"bar\" in {\"foo\": \"bar\"} # key, val with object y := 2, \"baz\" in [\"foo\", \"bar\", \"baz\"] # key, val with array }``` ``` { \"p\": [ true, true ] }``` Note that in list contexts, like set or array definitions and function arguments, parentheses are required to use the form with two left-hand side arguments compare: ``` p := x if { x := { 0, 2 in [2] } } q := x if { x := { (0, 2 in [2]) } } w := x if { x := g((0, 2 in [2])) } z := x if { x := f(0, 2 in [2]) } f(x, y) := sprintf(\"two function arguments: %v, %v\", [x, y]) g(x) := sprintf(\"one function argument: %v\", [x])``` ``` { \"p\": [ true, 0 ], \"q\": [ true ], \"w\": \"one function argument: true\", \"z\": \"two function arguments: 0, true\" }``` Combined with not, the operator can be handy when asserting that an element is not member of an array: ``` deny if not \"admin\" in input.user.roles test_deny if { deny with input.user.roles as [\"operator\", \"user\"] }``` ``` { \"test_deny\": true }``` Note that expressions using the in operator always return true or false, even when called in non-collection arguments: ``` q := x if { x := 3 in \"three\" }``` ``` { \"q\": false }``` Using the some variant, it can be used to introduce new variables based on a collections items: ``` p contains x if { some x in [\"a\", \"r\", \"r\", \"a\", \"y\"] } q contains x if { some x in {\"s\", \"e\", \"t\"} } r contains x if { some x in {\"foo\": \"bar\", \"baz\": \"quz\"} }``` ``` { \"p\": [ \"a\", \"r\", \"y\" ], \"q\": [ \"e\", \"s\", \"t\" ], \"r\": [ \"bar\", \"quz\" ] }``` Furthermore, passing a second argument allows you to work with object keys and array indices: ``` p contains x if { some x, \"r\" in [\"a\", \"r\", \"r\", \"a\", \"y\"] # key variable, value constant } q[x] := y if { some x, y in [\"a\", \"r\", \"r\", \"a\", \"y\"] # both variables } r[y] := x if { some x, y in {\"foo\": \"bar\", \"baz\": \"quz\"} }``` ``` { \"p\": [ 1, 2 ], \"q\": { \"0\": \"a\", \"1\": \"r\", \"2\": \"r\", \"3\": \"a\", \"4\": \"y\" }, \"r\": { \"bar\": \"foo\", \"quz\": \"baz\" } }``` Any argument to the some variant can be a composite, non-ground value: ``` p[x] = y if { some x, {\"foo\": y} in [{\"foo\": 100}, {\"bar\": 200}] } p[x] = y if { some {\"bar\": x}, {\"foo\": y} in {{\"bar\": \"b\"}: {\"foo\": \"f\"}} }``` ``` { \"p\": { \"0\": 100, \"b\": \"f\" } }``` Rego supports three kinds of equality: assignment (:=), comparison (==), and unification =. We recommend using assignment (:=) and comparison (==) whenever possible for policies that are easier to read and write. The assignment operator (:=) is used to assign values to variables. Variables assigned inside a rule are locally scoped to that rule and shadow global variables. ``` x := 100 p if { x :=" }, { "data": "# declare local variable 'x' and assign value 1 x != 100 # true because 'x' refers to local variable }``` Assigned variables are not allowed to appear before the assignment in the query. For example, the following policy will not compile: ``` p if { x != 100 x := 1 # error because x appears earlier in the query. 
} q if { x := 1 x := 2 # error because x is assigned twice. }``` ``` 2 errors occurred: module.rego:6: regocompileerror: var x referenced above module.rego:11: regocompileerror: var x assigned above``` A simple form of destructuring can be used to unpack values from arrays and assign them to variables: ``` address := [\"3 Abbey Road\", \"NW8 9AY\", \"London\", \"England\"] in_london if { [, , city, country] := address city == \"London\" country == \"England\" }``` ``` { \"address\": [ \"3 Abbey Road\", \"NW8 9AY\", \"London\", \"England\" ], \"in_london\": true }``` Comparison checks if two values are equal within a rule. If the left or right hand side contains a variable that has not been assigned a value, the compiler throws an error. ``` p if { x := 100 x == 100 # true because x refers to the local variable }``` ``` { \"p\": true }``` ``` y := 100 q if { y == 100 # true because y refers to the global variable }``` ``` { \"q\": true, \"y\": 100 }``` ``` r if { z == 100 # compiler error because z has not been assigned a value }``` ``` 1 error occurred: module.rego:5: regounsafevar_error: var z is unsafe``` Unification (=) combines assignment and comparison. Rego will assign variables to values that make the comparison true. Unification lets you ask for values for variables that make an expression true. ``` [x, \"world\"] = [\"hello\", y]``` ``` +++ | x | y | +++ | \"hello\" | \"world\" | +++``` ``` sites[i].servers[j].name = apps[k].servers[m]``` ``` +++++ | i | j | k | m | +++++ | 0 | 0 | 0 | 0 | | 0 | 1 | 0 | 1 | | 0 | 2 | 1 | 0 | | 1 | 0 | 0 | 2 | | 1 | 1 | 0 | 3 | | 1 | 2 | 1 | 1 | | 2 | 0 | 0 | 4 | | 2 | 1 | 2 | 0 | +++++``` As opposed to when assignment (:=) is used, the order of expressions in a rule does not affect the documents content. ``` s if { x > y y = 41 x = 42 }``` Here is a comparison of the three forms of equality. ``` Equality Applicable Compiler Errors Use Case -- -- - - := Everywhere Var already assigned Assign variable == Everywhere Var not assigned Compare values = Everywhere Values cannot be computed Express query ``` Best practice is to use assignment := and comparison == wherever possible. The additional compiler checks help avoid errors when writing policy, and the additional syntax helps make the intent clearer when reading policy. Under the hood := and == are syntactic sugar for =, local variable creation, and additional compiler checks. The following comparison operators are supported: ``` a == b # `a` is equal to `b`. a != b # `a` is not equal to `b`. a < b # `a` is less than `b`. a <= b # `a` is less than or equal to `b`. a > b # `a` is greater than `b`. a >=" }, { "data": "# `a` is greater than or equal to `b`.``` None of these operators bind variables contained in the expression. As a result, if either operand is a variable, the variable must appear in another expression in the same rule that would cause the variable to be bound, i.e., an equality expression or the target position of a built-in function. In some cases, rules must perform simple arithmetic, aggregation, and so on. Rego provides a number of built-in functions (or built-ins) for performing these tasks. Built-ins can be easily recognized by their syntax. All built-ins have the following form: ``` <name>(<arg-1>, <arg-2>, ..., <arg-n>) ``` Built-ins usually take one or more input values and produce one output value. Unless stated otherwise, all built-ins accept values or variables as output arguments. 
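For instance, a built-in such as count can be called in the usual function style, or in a relation style where a variable in the last argument position captures the output. A minimal sketch (assuming that calling convention; the rule names p and q are illustrative):

```
p if {
    count([1, 2, 3], n) # n is bound to the built-in's output
    n == 3
}

q if {
    n := count([1, 2, 3]) # the more common, function-style call
    n == 3
}
```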
If a built-in function is invoked with a variable as input, the variable must be safe, i.e., it must be assigned elsewhere in the query. Built-ins can include . characters in the name. This allows them to be namespaced. If you are adding custom built-ins to OPA, consider namespacing them to avoid naming conflicts, e.g., org.example.special_func. See the Policy Reference document for details on each built-in function. By default, built-in function calls that encounter runtime errors evaluate to undefined (which can usually be treated as false) and do not halt policy evaluation. This ensures that built-in functions can be called with invalid inputs without causing the entire policy to stop evaluating. In most cases, policies do not have to implement any kind of error handling logic. If error handling is required, the built-in function call can be negated to test for undefined. For example: ``` allow if { io.jwt.verify_hs256(input.token, \"secret\") [, payload, ] := io.jwt.decode(input.token) payload.role == \"admin\" } reason contains \"invalid JWT supplied as input\" if { not io.jwt.decode(input.token) }``` ``` { \"token\": \"a poorly formatted token\" }``` ``` { \"reason\": [ \"invalid JWT supplied as input\" ] }``` If you wish to disable this behaviour and instead have built-in function call errors treated as exceptions that halt policy evaluation enable strict built-in errors in the caller: | API | Flag | |:--|:--| | POST v1/data (HTTP) | strict-builtin-errors query parameter | | GET v1/data (HTTP) | strict-builtin-errors query parameter | | opa eval (CLI) | --strict-builtin-errors | | opa run (REPL) | > strict-builtin-errors | | rego Go module | rego.StrictBuiltinErrors(true) option | | Wasm | Not Available | The rules below define the content of documents describing a simplistic deployment environment. These documents are referenced in other sections above. ``` sites := [ { \"region\": \"east\", \"name\": \"prod\", \"servers\": [ { \"name\": \"web-0\", \"hostname\": \"hydrogen\" }, { \"name\": \"web-1\", \"hostname\": \"helium\" }, { \"name\": \"db-0\", \"hostname\": \"lithium\" } ] }, { \"region\": \"west\", \"name\": \"smoke\", \"servers\": [ { \"name\": \"web-1000\", \"hostname\": \"beryllium\" }, { \"name\": \"web-1001\", \"hostname\": \"boron\" }, { \"name\": \"db-1000\", \"hostname\": \"carbon\" } ] }, { \"region\": \"west\", \"name\": \"dev\", \"servers\": [ { \"name\": \"web-dev\", \"hostname\": \"nitrogen\" }, { \"name\": \"db-dev\", \"hostname\": \"oxygen\" } ] } ] apps := [ { \"name\": \"web\", \"servers\": [\"web-0\", \"web-1\", \"web-1000\", \"web-1001\", \"web-dev\"] }, { \"name\": \"mysql\", \"servers\": [\"db-0\", \"db-1000\"] }, { \"name\": \"mongodb\", \"servers\": [\"db-dev\"] } ] containers := [ { \"image\": \"redis\", \"ipaddress\": \"10.0.0.1\", \"name\": \"big_stallman\" }, { \"image\": \"nginx\", \"ipaddress\": \"10.0.0.2\", \"name\": \"cranky_euclid\" } ]``` The package and individual rules in a module can be annotated with a rich set of metadata. ``` allow if { ... }``` Annotations are grouped within a metadata block, and must be specified as YAML within a comment block that must start with #" }, { "data": "Also, every line in the comment block containing the annotation must start at Column 1 in the module/file, or otherwise, they will be ignored. | Name | Type | Description | |:|:-|:-| | scope | string; one of package, rule, document, subpackages | The scope on which the schemas annotation is applied. Read more here. 
| title | string | A human-readable name for the annotation target. Read more here. |
| description | string | A description of the annotation target. Read more here. |
| related_resources | list of URLs | A list of URLs pointing to related resources/documentation. Read more here. |
| authors | list of strings | A list of authors for the annotation target. Read more here. |
| organizations | list of strings | A list of organizations related to the annotation target. Read more here. |
| schemas | list of objects | A list of associations between value paths and schema definitions. Read more here. |
| entrypoint | boolean | Whether or not the annotation target is to be used as a policy entrypoint. Read more here. |
| custom | mapping of arbitrary data | A custom mapping of named parameters holding arbitrary data. Read more here. |

Annotations can be defined at the rule or package level. The scope annotation in a metadata block determines how that metadata block will be applied. If the scope field is omitted, it defaults to the scope for the statement that immediately follows the annotation. The scope values that are currently supported are:

- rule: applies to the individual rule statement
- document: applies to all rules with the same name in the same package
- package: applies to all rules and functions in the package
- subpackages: applies to all rules and functions in the package and its subpackages

Since the document scope annotation applies to all rules with the same name in the same package, and the package and subpackages scope annotations apply to all packages with a matching path, metadata blocks with these scopes are applied over all files with applicable package and rule paths. As there is no ordering across files in the same package, the document, package, and subpackages scope annotations can only be specified once per path. The document scope annotation can be applied to any rule in the set (i.e., ordering does not matter).

```
# METADATA
# scope: document
# description: A rule that determines if x can do y
allow if { x == 1 }

allow if { x == 2 }
```

The title annotation is a string value giving a human-readable name to the annotation target.

```
# METADATA
# title: Allow with x equal to one
allow if { x == 1 }

# METADATA
# title: Allow with x equal to two
allow if { x == 2 }
```

The description annotation is a string value describing the annotation target, such as its purpose.

```
# METADATA
# description: Allow access when the required conditions are met
allow if {
    ...
}
```

The related_resources annotation is a list of related-resource entries, where each links to some related external resource, such as RFCs and other reading material. A related-resource entry can either be an object or a short-form string holding a single URL. When a related-resource entry is presented as an object, it has two fields:

- ref: a URL pointing to the resource (required)
- description: a short description of the resource

When a related-resource entry is presented as a string, it needs to be a valid URL.

```
# METADATA
# related_resources:
# - ref: https://example.com/policies
#   description: documentation for this policy
allow if {
    ...
}
```

```
# METADATA
# related_resources:
# - https://example.com/policies
allow if {
    ...
}
```

The authors annotation is a list of author entries, where each entry denotes an author. An author entry can either be an object or a short-form string. When an author entry is presented as an object, it has two fields:

- name: the name of the author
- email: the email of the author

At least one of the above fields is required for a valid author entry. When an author entry is presented as a string, it has the format { name } [ "<" email ">"], where the name of the author is a sequence of whitespace-separated words. Optionally, the last word may represent an email, if enclosed with <>.

```
# METADATA
# authors:
# - name: John Doe
#   email: john@example.com
allow if {
    ...
}
```

```
# METADATA
# authors:
# - John Doe <john@example.com>
allow if {
    ...
}
```

The organizations annotation is a list of string values representing the organizations associated with the annotation target.

```
# METADATA
# organizations:
# - Acme Corp.
allow if {
    ...
}
```

The schemas annotation is a list of key value pairs, associating schemas to data values. In-depth information on this topic can be found here.
Schema files can be referenced by path, where each path starts with the schema namespace, and trailing components specify the path of the schema file (sans file-ending) relative to the root directory specified by the --schema flag on applicable commands. If the --schema flag is not present, referenced schemas are ignored during type checking.

```
# METADATA
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow if {
    access := data.acl["alice"]
    access[_] == input.operation
}
```

Schema definitions can be inlined by specifying the schema structure as a YAML or JSON map. Inlined schemas are always used to inform type checking for the eval, check, and test commands; in contrast to by-reference schema annotations, which require the --schema flag to be present in order to be evaluated.

```
# METADATA
# schemas:
# - input.x: {"type": "number"}
allow if {
    input.x == 42
}
```

The entrypoint annotation is a boolean used to mark rules and packages that should be used as entrypoints for a policy. This value is false by default, and can only be used at rule or package scope. The build and eval CLI commands will automatically pick up annotated entrypoints; you do not have to specify them with --entrypoint.

The custom annotation is a mapping of user-defined data, mapping string keys to arbitrarily typed values.

```
# METADATA
# custom:
#   severity: MEDIUM
allow if {
    ...
}
```

In the example below, you can see how to access an annotation from within a policy. Given the input:

```
{
    "number": 11,
    "subject": {
        "name": "John doe",
        "role": "customer"
    }
}
```

The following policy

```
package example

import rego.v1

# METADATA
# description: Numbers may not be higher than 5
# custom:
#   severity: MEDIUM
output := decision if {
    input.number > 5
    annotation := rego.metadata.rule()
    decision := {
        "severity": annotation.custom.severity,
        "message": annotation.description,
    }
}
```

will output

```
{
    "output": {
        "message": "Numbers may not be higher than 5",
        "severity": "MEDIUM"
    }
}
```

If you'd like more examples and information on this, you can see more here under the Rego policy reference.

Annotations can be listed through the inspect command by using the -a flag:

```
opa inspect -a
```

The ast.AnnotationSet is a collection of all ast.Annotations declared in a set of modules. An ast.AnnotationSet can be created from a slice of compiled modules:

```
var modules []*ast.Module

// ...

as, err := ast.BuildAnnotationSet(modules)
if err != nil {
    // Handle error.
}
```

or can be retrieved from an ast.Compiler instance:

```
var modules []*ast.Module

// ...

compiler := ast.NewCompiler()
compiler.Compile(modules)
as := compiler.GetAnnotationSet()
```

The ast.AnnotationSet can be flattened into a slice of ast.AnnotationsRef, which is a complete, sorted list of all annotations, grouped by the path and location of their targeted package or rule.

```
flattened := as.Flatten()
for _, entry := range flattened {
    fmt.Printf("%v at %v has annotations %v\n", entry.Path, entry.Location, entry.Annotations)
}

// Output:
// data.foo at foo.rego:5 has annotations {"scope":"subpackages","organizations":["Acme Corp."]}
// data.foo.bar at mod:3 has annotations {"scope":"package","description":"A couple of useful rules"}
// data.foo.bar.p at mod:7 has annotations {"scope":"rule","title":"My Rule P"}
//
// For modules:
// # METADATA
// # scope: subpackages
// # organizations:
// # - Acme Corp.
// package foo
//
// # METADATA
// # description: A couple of useful rules
// package foo.bar
//
// # METADATA
// # title: My Rule P
// p := 7
```

Given an ast.Rule, the ast.AnnotationSet can return the chain of annotations declared for that rule and its path ancestry.
The returned slice is ordered starting with the annotations for the rule, going outward to the farthest node with declared annotations in the rule's path ancestry.

```
var rule *ast.Rule

// ...

chain := as.Chain(rule)
for _, link := range chain {
    fmt.Printf("link at %v has annotations %v\n", link.Path, link.Annotations)
}

// Output:
// data.foo.bar.p at mod:7 has annotations {"scope":"rule","title":"My Rule P"}
// data.foo.bar at mod:3 has annotations {"scope":"package","description":"A couple of useful rules"}
// data.foo at foo.rego:5 has annotations {"scope":"subpackages","organizations":["Acme Corp."]}
//
// For modules:
// # METADATA
// # scope: subpackages
// # organizations:
// # - Acme Corp.
// package foo
//
// # METADATA
// # description: A couple of useful rules
// package foo.bar
//
// # METADATA
// # title: My Rule P
// p := 7
```

You can provide one or more input schema files and/or data schema files to opa eval to improve static type checking and get more precise error reports as you develop Rego code.

The -s flag can be used to upload schemas for input and data documents in JSON Schema format. You can either load a single JSON schema file for the input document or a directory of schema files.

```
-s, --schema string set schema file path or directory path
```

When a single file is passed, it is a schema file associated with the input document globally. This means that for all rules in all packages, the input has a type derived from that schema. There is no constraint on the name of the file; it could be anything. Example:

```
opa eval data.envoy.authz.allow -i opa-schema-examples/envoy/input.json -d opa-schema-examples/envoy/policy.rego -s opa-schema-examples/envoy/schemas/my-schema.json
```

When a directory path is passed, annotations will be used in the code to indicate what expressions map to what schemas (see below). Both input schema files and data schema files can be provided in the same directory, with different names. The directory of schemas may have any sub-directories. Notice that when a directory is passed, the input document does not have a schema associated with it globally. This must also be indicated via an annotation. Example:

```
opa eval data.kubernetes.admission -i opa-schema-examples/kubernetes/input.json -d opa-schema-examples/kubernetes/policy.rego -s opa-schema-examples/kubernetes/schemas
```

Schemas can also be provided for policy and data files loaded via opa eval --bundle. Example:

```
opa eval data.kubernetes.admission -i opa-schema-examples/kubernetes/input.json -b opa-schema-examples/bundle.tar.gz -s opa-schema-examples/kubernetes/schemas
```

Samples provided at: https://github.com/aavarghese/opa-schema-examples/

Consider the following Rego code, which assumes as input a Kubernetes admission review. For resources that are Pods, it checks that the image name starts with a specific prefix.

pod.rego

```
package kubernetes.admission

import rego.v1

deny contains msg if {
    input.request.kind.kinds == "Pod"
    image := input.request.object.spec.containers[_].image
    not startswith(image, "hooli.com/")
    msg := sprintf("image '%v' comes from untrusted registry", [image])
}
```

Notice that this code has a typo in it: input.request.kind.kinds is undefined and should have been input.request.kind.kind.
Consider the following input document:

input.json

```
{
    "kind": "AdmissionReview",
    "request": {
        "kind": {
            "kind": "Pod",
            "version": "v1"
        },
        "object": {
            "metadata": {
                "name": "myapp"
            },
            "spec": {
                "containers": [
                    {
                        "image": "nginx",
                        "name": "nginx-frontend"
                    },
                    {
                        "image": "mysql",
                        "name": "mysql-backend"
                    }
                ]
            }
        }
    }
}
```

Clearly there are 2 image names that are in violation of the policy. However, when we evaluate the erroneous Rego code against this input we obtain:

```
% opa eval data.kubernetes.admission --format pretty -i opa-schema-examples/kubernetes/input.json -d opa-schema-examples/kubernetes/policy.rego
[]
```

The empty value returned is indistinguishable from a situation where the input did not violate the policy. This error therefore causes violating inputs to go undetected. If we fix the Rego code and change input.request.kind.kinds to input.request.kind.kind, then we obtain the expected result:

```
[
    "image 'nginx' comes from untrusted registry",
    "image 'mysql' comes from untrusted registry"
]
```

With this feature, it is possible to pass a schema to opa eval, written in JSON Schema. Consider the admission review schema provided at: https://github.com/aavarghese/opa-schema-examples/blob/main/kubernetes/schemas/input.json

We can pass this schema to the evaluator as follows:

```
% opa eval data.kubernetes.admission --format pretty -i opa-schema-examples/kubernetes/input.json -d opa-schema-examples/kubernetes/policy.rego -s opa-schema-examples/kubernetes/schemas/input.json
```

With the erroneous Rego code, we now obtain the following type error:

```
1 error occurred: ../../aavarghese/opa-schema-examples/kubernetes/policy.rego:5: rego_type_error: undefined ref: input.request.kind.kinds
	input.request.kind.kinds
	                  ^
	have: "kinds"
	want (one of): ["kind" "version"]
```

This indicates the error to the Rego developer right away, without needing to observe the results of runs on actual data, thereby improving productivity.

When passing a directory of schemas to opa eval, schema annotations become handy to associate a Rego expression with a corresponding schema within a given scope:

```
# METADATA
# schemas:
# - input: schema.input
allow if {
    ...
}
```

See the annotations documentation for general information relating to annotations.

The schemas field specifies an array associating schemas to data values. Paths must start with input or data (i.e., they must be fully-qualified). The type checker derives a Rego Object type for the schema, and an appropriate entry is added to the type environment before type checking the rule. This entry is removed upon exit from the rule.

Example: Consider the following Rego code, which checks if an operation is allowed by a user, given an ACL data document:

```
package policy

import rego.v1

import data.acl

default allow := false

# METADATA
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow if {
    access := data.acl.alice
    access[_] == input.operation
}

allow if {
    access := data.acl.bob
    access[_] == input.operation
}
```

Consider a directory named mySchemasDir with the following structure, provided via opa eval --schema opa-schema-examples/mySchemasDir

```
mySchemasDir/
    input.json
    acl-schema.json
```

For actual code samples, see https://github.com/aavarghese/opa-schema-examples/tree/main/acl.

In the first allow rule above, the input document has the schema input.json, and data.acl has the schema acl-schema.json.
Note that we use the relative path inside the mySchemasDir directory to identify a schema, omit the .json suffix, and use the global variable schema to stand for the top-level of the directory. Schemas in annotations are proper Rego references. So schema.input is also valid, but schema.acl-schema is not.

If we had the expression data.acl.foo in this rule, it would result in a type error because the schema contained in acl-schema.json only defines object properties "alice" and "bob" in the ACL data document. On the other hand, this annotation does not constrain other paths under data. What it says is that we know the type of data.acl statically, but not that of other paths. So, for example, data.foo is not a type error and gets assigned the type Any.

Note that the second allow rule doesn't have a METADATA comment block attached to it, and hence will not be type checked with any schemas.

On a different note, schema annotations can also be added to policy files that are part of a bundle package loaded via opa eval --bundle, along with the --schema parameter, for type checking a set of *.rego policy files.

The scope of the schema annotation can be controlled through the scope annotation. In case of overlap, schema annotations override each other as follows:

```
rule overrides document
document overrides package
package overrides subpackages
```

The following sections explain how the different scopes affect schema annotation overriding for type checking.

In the example above, the second rule does not include an annotation, so type checking of the second rule would not take schemas into account. To enable type checking on the second (or other rules in the same file) we could specify the annotation multiple times:

```
# METADATA
# scope: rule
# schemas:
# - input: schema.input
allow if {
    access := data.acl["alice"]
    access[_] == input.operation
}

# METADATA
# scope: rule
# schemas:
# - input: schema.input
allow if {
    access := data.acl["bob"]
    access[_] == input.operation
}
```

This is obviously redundant and error-prone. To avoid this problem, we can define the annotation once on a rule with scope document:

```
# METADATA
# scope: document
# schemas:
# - input: schema.input
allow if {
    access := data.acl["alice"]
    access[_] == input.operation
}

allow if {
    access := data.acl["bob"]
    access[_] == input.operation
}
```

In this example, the annotation with document scope has the same effect as the two rule-scoped annotations in the previous example.

Annotations can be defined at the package level and then applied to all rules within the package:

```
# METADATA
# scope: package
# schemas:
# - input: schema.input
package example

import rego.v1

allow if {
    access := data.acl["alice"]
    access[_] == input.operation
}

allow if {
    access := data.acl["bob"]
    access[_] == input.operation
}
```

Package-scoped schema annotations are useful when all rules in the same package operate on the same input structure. In some cases, when policies are organized into many sub-packages, it is useful to declare schemas recursively for them using the subpackages scope. For example:

```
# METADATA
# scope: subpackages
# schemas:
# - input: schema.input
package kubernetes.admission
```

This snippet would declare the top-level schema for input for the kubernetes.admission package as well as all subpackages. If admission control rules were defined inside packages like kubernetes.admission.workloads.pods, they would be able to pick up that one schema declaration.

JSON Schemas are often incomplete specifications of the format of data. For example, a Kubernetes Admission Review resource has a field object which can contain any other Kubernetes resource. A schema for Admission Review has a generic type object for that field that has no further specification.
To allow more precise type checking in such cases, we support overriding existing schemas. Consider the following example:

```
package kubernetes.admission

import rego.v1

# METADATA
# schemas:
# - input: schema.input
# - input.request.object: schema.pod
deny contains msg if {
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[_].image
    not startswith(image, "hooli.com/")
    msg := sprintf("image '%v' comes from untrusted registry", [image])
}
```

In this example, the input is associated with an Admission Review schema, and furthermore input.request.object is set to have the schema of a Kubernetes Pod. In effect, the second schema annotation overrides the first one. Overriding is a schema transformation feature and combines existing schemas. In this case, we are combining the Admission Review schema with that of a Pod.

Notice that the order of schema annotations matters for overriding to work correctly.

Given a schema annotation, if a prefix of the path already has a type in the environment, then the annotation has the effect of merging and overriding the existing type with the type derived from the schema. In the example above, the prefix input already has a type in the type environment, so the second annotation overrides this existing type. Overriding affects the type of the longest prefix that already has a type. If no such prefix exists, the new path and type are added to the type environment for the scope of the rule.

In general, consider the existing Rego type:

```
object{a: object{b: object{c: C, d: D, e: E}}}
```

If we override this type with the following type (derived from a schema annotation of the form a.b.e: schema-for-E1):

```
object{a: object{b: object{e: E1}}}
```

It results in the following type:

```
object{a: object{b: object{c: C, d: D, e: E1}}}
```

Notice that b still has its fields c and d, so overriding has a merging effect as well. Moreover, the type of expression a.b.e is now E1 instead of E.

We can also use overriding to add new paths to an existing type, so if we override the initial type with the following:

```
object{a: object{b: object{f: F}}}
```

we obtain the following type:

```
object{a: object{b: object{c: C, d: D, e: E, f: F}}}
```

We use schemas to enhance the type checking capability of OPA, and not to validate the input and data documents against desired schemas. This burden is still on the user, and care must be taken when using overriding to ensure that the input and data provided are sensible and validated against the transformed schemas.

It is sometimes useful to have different input schemas for different rules in the same package. This can be achieved as illustrated by the following example:

```
package policy

import rego.v1

import data.acl

default allow := false

# METADATA
# schemas:
# - input: schema.input
allow if {
    access := data.acl[input.user]
    access[_] == input.operation
}

# METADATA
# schemas:
# - input: schema["whocan-input-schema"]
whocan contains user if {
    access := acl[user]
    access[_] == input.operation
}
```

The directory that is passed to opa eval is the following:

```
mySchemasDir/
    input.json
    acl-schema.json
    whocan-input-schema.json
```

In this example, we associate the schema input.json with the input document in the rule allow, and the schema whocan-input-schema.json with the input document for the rule whocan.

Rego has a gradual type system, meaning that types can be partially known statically. For example, an object could have certain fields whose types are known and others that are unknown statically. OPA type checks what it knows statically and leaves the unknown parts to be type checked at runtime.
An OPA object type has two parts: the static part with the type information known statically, and a dynamic part, which can be nil (meaning everything is known statically) or non-nil and indicating what is unknown. When we derive a type from a schema, we try to match what is known and unknown in the schema. For example, an object that has no specified fields becomes the Rego type Object{Any: Any}. However, currently additionalProperties and additionalItems are ignored. When a schema is fully specified, we derive a type with its dynamic part set to nil, meaning that we take a strict interpretation in order to get the most out of static type checking. This is the case even if additionalProperties is set to true in the schema. In the future, we will take this feature into account when deriving Rego types. When overriding existing types, the dynamicity of the overridden prefix is preserved.

JSON Schema provides keywords such as anyOf and allOf to structure a complex schema. For anyOf, at least one of the subschemas must be true, and for allOf, all subschemas must be true. The type checker is able to identify such keywords and derive a more robust Rego type through more complex schemas.

Specifically, anyOf acts as a Rego Or type where at least one (possibly more than one) of the subschemas is true. Consider the following Rego and schema file containing anyOf:

policy-anyOf.rego

```
package kubernetes.admission

import rego.v1

deny if {
    input.request.servers.versions == "Pod"
}
```

input-anyOf.json

```
{
    "$schema": "http://json-schema.org/draft-07/schema",
    "type": "object",
    "properties": {
        "kind": {"type": "string"},
        "request": {
            "type": "object",
            "anyOf": [
                {
                    "properties": {
                        "kind": {
                            "type": "object",
                            "properties": {
                                "kind": {"type": "string"},
                                "version": {"type": "string"}
                            }
                        }
                    }
                },
                {
                    "properties": {
                        "server": {
                            "type": "object",
                            "properties": {
                                "accessNum": {"type": "integer"},
                                "version": {"type": "string"}
                            }
                        }
                    }
                }
            ]
        }
    }
}
```

We can see that request is an object with two options, as indicated by the choices under anyOf: it contains either a kind object or a server object.

The type checker finds the first error in the Rego code, suggesting that servers should be either kind or server.

```
input.request.servers.versions
              ^
have: "servers"
want (one of): ["kind" "server"]
```

Once this is fixed, the second typo is highlighted, prompting the user to choose between accessNum and version.

```
input.request.server.versions
                     ^
have: "versions"
want (one of): ["accessNum" "version"]
```

Specifically, the allOf keyword implies that all conditions under allOf within a schema must be met by the given data. allOf is implemented by merging the types from all of the JSON subschemas listed under allOf before parsing the result to convert it to a Rego type. Merging of the JSON subschemas essentially combines the passed-in subschemas based on what types they contain.
Consider the following Rego and schema file containing allOf:

policy-allOf.rego

```
package kubernetes.admission

import rego.v1

deny if {
    input.request.servers.versions == "Pod"
}
```

input-allof.json

```
{
    "$schema": "http://json-schema.org/draft-07/schema",
    "type": "object",
    "properties": {
        "kind": {"type": "string"},
        "request": {
            "type": "object",
            "allOf": [
                {
                    "properties": {
                        "kind": {
                            "type": "object",
                            "properties": {
                                "kind": {"type": "string"},
                                "version": {"type": "string"}
                            }
                        }
                    }
                },
                {
                    "properties": {
                        "server": {
                            "type": "object",
                            "properties": {
                                "accessNum": {"type": "integer"},
                                "version": {"type": "string"}
                            }
                        }
                    }
                }
            ]
        }
    }
}
```

We can see that request is an object with properties as indicated by the elements listed under allOf: a kind object and a server object.

The type checker finds the first error in the Rego code, suggesting that servers should be server.

```
input.request.servers.versions
              ^
have: "servers"
want (one of): ["kind" "server"]
```

Once this is fixed, the second typo is highlighted, informing the user that versions should be one of accessNum or version.

```
input.request.server.versions
                     ^
have: "versions"
want (one of): ["accessNum" "version"]
```

Because the properties kind, version, and accessNum are all under the allOf keyword, the resulting schema that the given data must be validated against will contain the types contained in these properties' children (string and integer).

It is valid for JSON schemas to reference other JSON schemas via URLs, like this:

```
{
    "description": "Pod is a collection of containers that can run on a host.",
    "type": "object",
    "properties": {
        "metadata": {
            "$ref": "https://kubernetesjsonschema.dev/v1.14.0/_definitions.json#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta",
            "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata"
        }
    }
}
```

OPA's type checker will fetch these remote references by default. To control the remote hosts schemas will be fetched from, pass a capabilities file to your opa eval or opa check call. Starting from the capabilities.json of your OPA version (which can be found in the repository), add an allow_net key to it: its values are the IP addresses or host names that OPA is supposed to connect to for retrieving remote schemas.

```
{
    "builtins": [ ... ],
    "allow_net": ["kubernetesjsonschema.dev"]
}
```

To forbid all network access in schema checking, set allow_net to []. Host names are checked against the list as-is, so adding 127.0.0.1 to allow_net and referencing a schema from http://localhost/ will fail. Metaschemas for different JSON Schema draft versions are not subject to this constraint, as they are already provided by OPA's schema checker without requiring network access.

Currently this feature admits schemas written in JSON Schema, but it does not support every feature available in this format; in particular, a number of JSON Schema keywords are not yet supported.

A note of caution: overriding is a powerful capability that must be used carefully. For example, the user is allowed to write:

```
# METADATA
# schemas:
# - data: schema["some-schema"]
```

In this case, we are overriding the root of all documents to have some schema. Since all Rego code lives under data as virtual documents, this in practice renders all of them inaccessible (resulting in type errors). Similarly, assigning a schema to a package name is not a good idea and can cause problems.
Care must also be taken when defining overrides so that the transformation of schemas is sensible and data can be validated against the transformed schema.

For more examples, please see https://github.com/aavarghese/opa-schema-examples. This contains samples for Envoy, Kubernetes, and Terraform, including corresponding JSON Schemas.

For a reference on JSON Schema, please see: http://json-schema.org/understanding-json-schema/reference/index.html

For a tool that generates JSON Schema from JSON samples, please see: https://jsonschema.net/home

The Rego compiler supports strict mode, where additional constraints and safety checks are enforced during compilation. Compiler rules that will be enforced by future versions of OPA, but would be a breaking change once introduced, are incubated in strict mode. This creates an opportunity for users to verify that their policies are compatible with the next version of OPA before upgrading.

Compiler strict mode is supported by the check command, and can be enabled through the --strict/-S flag.

```
-S, --strict enable compiler strict mode
```

| Name | Description | Enforced by default in OPA version |
|:--|:--|:--|
| Duplicate imports | Duplicate imports, where one import shadows another, are prohibited. | 1.0 |
| Unused local assignments | Unused arguments or assignments local to a rule, function or comprehension are prohibited. | |
| Unused imports | Unused imports are prohibited. | |
| input and data reserved keywords | input and data are reserved keywords, and may not be used as names for rules and variable assignment. | 1.0 |
| Use of deprecated built-ins | Use of deprecated functions is prohibited, and these will be removed in OPA 1.0. Deprecated built-in functions: any, all, re_match, net.cidr_overlap, set_diff, cast_array, cast_set, cast_string, cast_boolean, cast_null, cast_object | 1.0 |

If the rego.v1 import is present in a module, all strict mode checks documented above, except the unused local assignment and unused imports checks, are enforced on the module. Additionally, the rego.v1 import also requires the usage of the if and contains keywords when declaring certain rules: the if keyword is required before a rule body, and the contains keyword is required for partial set rules.

In the future, when OPA v1.0 is released, breaking changes will be introduced to the Rego language. The rego.v1 import is a way to opt in to these breaking changes early, and ensure that your policies are compatible with OPA v1.0. If a module containing this import is not compatible with OPA v1.0, it will cause a compilation error. When a module imports rego.v1, the keyword requirements and strict-mode constraints described above are implied.

The rego.v1 import only affects the module where it's declared. It does not affect any other modules, even if they are importing, or are imported by, a module where rego.v1 is declared. In OPA v1.0, the rego.v1 import will have no semantic impact on the policy, as all its implied features and constraints will be enforced by default. It will however still be a valid statement, and won't cause any compilation errors.

Example policy that imports rego.v1 to be compatible with the future syntax in OPA v1.0:

```
package example

import rego.v1

l := [1, 2, 3]

default allow := false

allow if {
    count(violations) == 0
}

violations contains msg if {
    every x in l {
        x > 0
    }
    msg := "no negative entries"
}
```

The 8 ecosystem projects related to this page can be found in the corresponding OPA Ecosystem section.
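For example, to verify that a module like the one above satisfies both the strict-mode checks and the rego.v1 constraints before upgrading, you can run the check command described earlier (the file name example.rego is an assumption for illustration):

```
opa check --strict example.rego
```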
{ "category": "Provisioning", "file_name": ".md", "project_name": "KubeArmor", "subcategory": "Security & Compliance" }
[ { "data": "KubeArmor is a cloud-native runtime security enforcement system that restricts the behavior (such as process execution, file access, and networking operations) of pods, containers, and nodes (VMs) at the system level. KubeArmor leverages Linux security modules (LSMs) such as AppArmor, SELinux, or BPF-LSM to enforce the user-specified policies. KubeArmor generates rich alerts/telemetry events with container/pod/namespace identities by leveraging eBPF. | 0 | 1 | |-:|-:| | nan | nan | | nan | nan | Getting Started Use Cases KubeArmor Support Matrix How is KubeArmor different? Security Policy for Pods/Containers [Spec] [Examples] Security Policy for Hosts/Nodes [Spec] [Examples] ... detailed documentation Contribution Guide Development Guide, Testing Guide Join KubeArmor Slack FAQs Zoom Link Minutes: Document Calendar invite: Google Calendar, ICS file KubeArmor uses Tracee's system call utility functions. KubeArmor roadmap is tracked via KubeArmor Projects Last updated 1 month ago Was this helpful? Harden Infrastructure Protect critical paths such as cert bundles MITRE, STIGs, CIS based rules Restrict access to raw DB table Least Permissive Access Process Whitelisting Network Whitelisting Control access to sensitive assets Application Behavior Process execs, File System accesses Service binds, Ingress, Egress connections Sensitive system call profiling Deployment Models Kubernetes Deployment Containerized Deployment VM/Bare-Metal Deployment KubeArmor is Sandbox Project of the Cloud Native Computing Foundation." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Kubewarden", "subcategory": "Security & Compliance" }
[ { "data": "Kubewarden is a Kubernetes Policy Engine. It aims to be the Universal Policy Engine for Kubernetes. Kubewarden also provides an audit scanner to actively and continuously check policy enforcement over time. Kubewarden is a CNCF Sandbox project, originally created by SUSE Rancher. As stated on WebAssembly's official website: WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications. Wasm was originally conceived as a browser \"extension\". However, efforts are being made by the WebAssembly community to allow the execution of Wasm code outside browsers. Users can write Kubernetes policies using their favorite programming language, provided its toolchain can generate Wasm binaries. Wasm modules are portable, once built they can run on any kind of processor architecture and operating system. For example, a policy developed and built on Apple Silicon can run on AMD64/Intel64 Linux without conversion. Policy authors can reuse their skills, tools and best practices. Policies are \"traditional\" programs that can have reusable blocks (regular libraries), can be linted and tested, and be plugged into current CI and CD workflows. Kubewarden policies can be served by a regular web server or, better, be published from an OCI compliant registry. Kubewarden policies can be stored inside an OCI compliant registry as OCI artifacts." } ]
{ "category": "Provisioning", "file_name": "getting-started.md#run-your-first-scan.md", "project_name": "Kubescape", "subcategory": "Security & Compliance" }
[ { "data": "Write your documentation in Markdown and create a professional static site in minutes searchable, customizable, in 60+ languages, for all devices. Focus on the content of your documentation and create a professional static site in minutes. No need to know HTML, CSS or JavaScript let Material for MkDocs do the heavy lifting for you. Serve your documentation with confidence Material for MkDocs automatically adapts to perfectly fit the available screen estate, no matter the type or size of the viewing device. Desktop. Tablet. Mobile. All great. Make it yours change the colors, fonts, language, icons, logo, and more with a few lines of configuration. Material for MkDocs can be easily extended and provides many options to alter appearance and behavior. Don't let your users wait get incredible value with a small footprint by using one of the fastest themes available with excellent performance, yielding optimal search engine rankings and happy users that return. Own your documentation's complete sources and outputs, guaranteeing both integrity and security no need to entrust the backbone of your product knowledge to third-party platforms. Retain full control. You're in good company choose a mature and actively maintained solution built with state-of-the-art Open Source technologies, trusted by more than 20.000 individuals and organizations. Licensed under MIT. Material for MkDocs makes your documentation instantly searchable with zero effort: say goodbye to costly third-party crawler-based solutions that can take hours to update. Ship your documentation with a highly customizable and blazing fast search running entirely in the user's browser at no extra cost. Even better: search inside code blocks, exclude specific sections or entire pages, boost important pages in the results and build searchable documentation that works offline. Learn more Some examples need more explanation than others, which is why Material for MkDocs offers a unique and elegant way to add rich text almost anywhere in a code block. Code annotations can host formatted text, images, diagrams, code blocks, call-outs, content tabs, even interactive elements basically everything that can be expressed in Markdown or HTML. Of course, code annotations work beautifully on mobile and other touch devices and can be printed. Learn more Make an impact on social media and increase engagement when sharing links to your documentation by leveraging the built-in social plugin. Material for MkDocs makes it effortless to generate a beautiful preview image for each page, which will drive more interested users to your Open Source or commercial project. While the social plugin uses what's already there, i.e. your project's name and logo, as well as each page's title and description, it's easy to customize preview images. Supercharge your technical writing by making better use of the processing power of the visual cortex: Material for MkDocs ships more than 10,000 icons and emojis, which can be used in Markdown and HTML with simple shortcodes and an easy-to-remember syntax. Add color to icons and animate them. Make it pop. Use our dedicated icon search to quickly find the perfect icon for almost every use case and add custom icon sets with minimal configuration. Get started By joining the Insiders program, you'll get immediate access to the latest features while also helping support the ongoing development of Material for MkDocs. Thanks to our awesome sponsors, this project is actively maintained and kept in good shape. 
Together, we can build documentation that simply works!
{ "category": "Provisioning", "file_name": "quick-start.md", "project_name": "Kubewarden", "subcategory": "Security & Compliance" }
[ { "data": "The Kubewarden stack comprises: Some Some Some A deployment of a kubewarden-controller: this controller monitors the The Kubernetes Custom Resource Definitions (CRDs) defined by Kubewarden are described here. The Helm chart depends on cert-manager. Ensure you install cert-manager before the kubewarden-controller chart. You can install the latest version of cert-manager through Helm by running the following commands: ``` helm repo add jetstack https://charts.jetstack.iohelm install --wait --namespace cert-manager --create-namespace \\ --set installCRDs=true cert-manager jetstack/cert-manager``` Kubewarden policies can be retrieved from the GitHub container registry at https://ghcr.io. You need authentication to use the repository with the Kubewarden CLI, a GitHub personal access token (PAT). Their documentation guides you through creating one if you haven't already done so. Then you authenticate with a command like: ``` echo $PAT | docker login ghcr.io --username <my-gh-username> --password-stdin``` Deploy the Kubewarden stack using helm charts as follows: ``` helm repo add kubewarden https://charts.kubewarden.io``` ``` helm repo update kubewarden``` Install the following Helm charts inside the kubewarden namespace in your Kubernetes cluster: kubewarden-crds, which registers the kubewarden-controller, which installs the Kubewarden controller and the audit scanner If you need to disable the audit scanner component check the audit scanner installation documentation page. kubewarden-defaults, which will create a PolicyServer resource named default. It can also install a set of recommended policies to secure your cluster by enforcing some well known best practices. ``` helm install --wait -n kubewarden --create-namespace kubewarden-crds kubewarden/kubewarden-crds``` ``` helm install --wait -n kubewarden kubewarden-controller kubewarden/kubewarden-controller``` ``` helm install --wait -n kubewarden kubewarden-defaults kubewarden/kubewarden-defaults``` Since v0.4.0, a PolicyServer resource named default will not be created using the kubewarden-controller chart. Now a Helm chart called kubewarden-defaults, installs the default policy server. This means that if you aren't using the latest version of the kubewarden-controller and are trying to upgrade or delete, your default policy server won't be upgraded or deleted. So, you might run into issues if you try to install the kubewarden-defaults with some conflicting information, for example, the same policy server name. To be able to take advantage of future upgrades in the kubewarden-defaults Helm chart remove the existing PolicyServer resource created by the kubewarden-controller before installing the new chart. Now you can update your policy server using Helm upgrades without resource conflicts. When you remove the PolicyServer, all the policies bound to it will be removed as well. The default configuration values are sufficient for most deployments. All options are documented here. Kubewarden has three main components which you will interact with: A Kubewarden PolicyServer is managed by the kubewarden-controller. Multiple A PolicyServer validates incoming requests by executing Kubewarden policies against them. 
This is the default PolicyServer configuration:

```
apiVersion: policies.kubewarden.io/v1
kind: PolicyServer
metadata:
  name: reserved-instance-for-tenant-a
spec:
  image: ghcr.io/kubewarden/policy-server:v1.3.0
  replicas: 2
  serviceAccountName: ~
  env:
  - name: KUBEWARDEN_LOG_LEVEL
    value: debug
```

Check the latest released PolicyServer version and change the tag to match.

Overview of the attributes of the PolicyServer resource:

| Required | Placeholder | Description |
|:--|:--|:--|
| Y | image | The name of the container image |
| Y | replicas | The number of desired instances |
| N | serviceAccountName | The name of the ServiceAccount to use for the PolicyServer deployment. If no value is provided, the default ServiceAccount from the namespace where the kubewarden-controller is installed will be used |
| N | env | The list of environment variables |
| N | annotations | The list of annotations |

Changing any of these attributes causes a PolicyServer deployment with the new configuration.

Enforcing policies is the most common operation which a Kubernetes administrator performs. You can declare as many policies as you want; each targets one or more Kubernetes resources (that is, pods, Custom Resources, and others). You also specify the type of operations applied to targeted resources. The operations available are CREATE, UPDATE, DELETE and CONNECT.

This is an example of a ClusterAdmissionPolicy:

```
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: psp-capabilities
spec:
  policyServer: reserved-instance-for-tenant-a
  module: registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.9
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: true
  settings:
    allowed_capabilities:
    - CHOWN
    required_drop_capabilities:
    - NET_ADMIN
```

Overview of the attributes of the ClusterAdmissionPolicy resource:

| Required | Placeholder | Description |
|:--|:--|:--|
| N | policy-server | Identifies an existing PolicyServer object. The policy will be served only by this PolicyServer instance. A ClusterAdmissionPolicy that doesn't have an explicit PolicyServer will be served by the one named default |
| Y | module | The location of the Kubewarden policy. The following schemes are allowed: |
| | | - registry: The policy is downloaded from an OCI artifacts compliant container registry. Example: registry://<OCI registry/policy URL> |
| | | - http, https: The policy is downloaded from a regular HTTP(s) server. Example: https://<website/policy URL> |
| | | - file: The policy is loaded from a file in the computer file system. Example: file:///<policy WASM binary full path> |
| Y | resources | The Kubernetes resources evaluated by the policy |
| Y | operations | What operations for the previously given types should be forwarded to this admission policy by the API server for evaluation |
| Y | mutating | A boolean value that must be set to true for policies that can mutate incoming requests |
| N | settings | A free-form object that contains the policy configuration values |
| N | failurePolicy | The action to take if the request evaluated by a policy results in an error. The following options are allowed: |
| | | - Ignore: an error calling the webhook is ignored and the API request is allowed to continue |
| | | - Fail: an error calling the webhook causes the admission to fail and the API request to be rejected |

The complete documentation of these Custom Resources can be found here or on docs.crds.dev.

We will use the pod-privileged policy. We want to prevent the creation of privileged containers inside our Kubernetes cluster by enforcing this policy. Let's define a ClusterAdmissionPolicy:

```
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.2
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: false
EOF
```

This produces the following output:

```
clusteradmissionpolicy.policies.kubewarden.io/privileged-pods created
```

When a ClusterAdmissionPolicy is created, you can check its status:

```
kubectl get clusteradmissionpolicy.policies.kubewarden.io/privileged-pods
```

You should see the following output:

```
NAME              POLICY SERVER   MUTATING   STATUS
privileged-pods   default         false      pending
```

Once the new policy is ready to be served, the kubewarden-controller will register a ValidatingWebhookConfiguration object. It can be listed with:

```
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io -l kubewarden
```

You should see the following output:

```
NAME                          WEBHOOKS   AGE
clusterwide-privileged-pods   1          9s
```

Once the policy is active, we can test it. First, let's create a Pod with a Container not in privileged mode:

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
EOF
```

This produces the following output:

```
pod/unprivileged-pod created
```

The Pod is successfully created. Now, let's create a Pod with at least one Container with the privileged flag:

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    securityContext:
      privileged: true
EOF
```

The creation of the Pod has been denied by the policy and you should see the following message:

```
Error from server: error when creating "STDIN": admission webhook "clusterwide-privileged-pods.kubewarden.admission" denied the request: Privileged container is not allowed
```

Both examples didn't define a namespace, which means the default namespace was the target. However, as you could see in the second example, the policy is still applied. As stated above, this is due to the scope being cluster-wide and not targeting a specific namespace.

You can remove the resources created by uninstalling the helm charts as follows:

```
helm uninstall --namespace kubewarden kubewarden-defaults
```

```
helm uninstall --namespace kubewarden kubewarden-controller
```

```
helm uninstall --namespace kubewarden kubewarden-crds
```

Once the helm charts have been uninstalled, remove the Kubernetes namespace that was used to deploy the Kubewarden stack:

```
kubectl delete namespace kubewarden
```

Kubewarden contains a helm pre-delete hook that removes all PolicyServers and kubewarden-controllers.
Then the kubewarden-controller will delete all resources, so it's important that kubewarden-controller is running when helm uninstall is executed.

```
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io -l "kubewarden"
```

```
kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io -l "kubewarden"
```

If these resources are not automatically removed, remove them manually by using the following commands:

```
kubectl delete -l "kubewarden" validatingwebhookconfigurations.admissionregistration.k8s.io
```

```
kubectl delete -l "kubewarden" mutatingwebhookconfigurations.admissionregistration.k8s.io
```

Now, you are ready to deploy Kubewarden! Have a look at the policies on artifacthub.io, on GitHub, or reuse existing Rego policies as shown in the following chapters.
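If you later want policies that apply only within a single Namespace, the same spec shape can be used with the namespaced AdmissionPolicy resource (a sketch reusing the pod-privileged module from this guide; the team-a namespace is an illustrative assumption):

```
apiVersion: policies.kubewarden.io/v1
kind: AdmissionPolicy
metadata:
  name: privileged-pods
  namespace: team-a
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.2
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: false
```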
{ "category": "Provisioning", "file_name": "#quick-start.md", "project_name": "Kyverno", "subcategory": "Security & Compliance" }
[ { "data": "Kyverno (Greek for govern) is a policy engine designed specifically for Kubernetes. Some of its many features include: Kyverno allows cluster administrators to manage environment specific configurations independently of workload configurations and enforce configuration best practices for their clusters. Kyverno can be used to scan existing workloads for best practices, or can be used to enforce best practices by blocking or mutating API requests. Kyverno runs as a dynamic admission controller in a Kubernetes cluster. Kyverno receives validating and mutating admission webhook HTTP callbacks from the Kubernetes API server and applies matching policies to return results that enforce admission policies or reject requests. Kyverno policies can match resources using the resource kind, name, label selectors, and much more. Mutating policies can be written as overlays (similar to Kustomize) or as a RFC 6902 JSON Patch. Validating policies also use an overlay style syntax, with support for pattern matching and conditional (if-then-else) processing. Policy enforcement is captured using Kubernetes events. For requests that are either allowed or existed prior to introduction of a Kyverno policy, Kyverno creates Policy Reports in the cluster which contain a running list of resources matched by a policy, their status, and more. The diagram below shows the high-level logical architecture of Kyverno. The Webhook is the server which handles incoming AdmissionReview requests from the Kubernetes API server and sends them to the Engine for processing. It is dynamically configured by the Webhook Controller which watches the installed policies and modifies the webhooks to request only the resources matched by those policies. The Cert Renewer is responsible for watching and renewing the certificates, stored as Kubernetes Secrets, needed by the webhook. The Background Controller handles all generate and mutate-existing policies by reconciling UpdateRequests, an intermediary resource. And the Report Controllers handle creation and reconciliation of Policy Reports from their intermediary resources, Admission Reports and Background Scan Reports. Kyverno also supports high availability. A highly-available installation of Kyverno is one in which the controllers selected for installation are configured to run with multiple replicas. Depending on the controller, the additional replicas may also serve the purpose of increasing the scalability of Kyverno. See the high availability page for more details on the various Kyverno controllers, their components, and how availability is handled in each one. This section is intended to provide you with some quick guides on how to get Kyverno up and running and demonstrate a few of Kyvernos seminal features. There are quick start guides which focus on validation, mutation, as well as generation allowing you to select the one (or all) which is most relevant to your use case. These guides are intended for proof-of-concept or lab demonstrations only and not recommended as a guide for production. Please see the installation page for more complete information on how to install Kyverno in" }, { "data": "First, install Kyverno from the latest release manifest. ``` 1kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.12.0/install.yaml ``` Next, select the quick start guide in which you are interested. Alternatively, start at the top and work your way down. 
In the validation guide, you will see how simple an example Kyverno policy can be which ensures a label called team is present on every Pod. Validation is the most common use case for policy and functions as a yes or no decision making process. Resources which are compliant with the policy are allowed to pass ("yes, this is allowed") and those which are not compliant may not be allowed to pass ("no, this is not allowed"). An additional effect of these validate policies is to produce Policy Reports. A Policy Report is a custom Kubernetes resource, produced and managed by Kyverno, which shows the results of policy decisions upon allowed resources in a user-friendly way.

Add the policy below to your cluster. It contains a single validation rule that requires that all Pods have the team label. Kyverno supports different rule types to validate, mutate, generate, cleanup, and verify image configurations. The field validationFailureAction is set to Enforce to block Pods that are non-compliant. Using the default value Audit will report violations but not block requests.

```
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-team
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "label 'team' is required"
      pattern:
        metadata:
          labels:
            team: "?*"
EOF
```

Try creating a Deployment without the required label.

```
kubectl create deployment nginx --image=nginx
```

You should see an error.

```
error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/default/nginx was blocked due to the following policies:

require-labels:
  autogen-check-team: 'validation error: label ''team'' is
    required. Rule autogen-check-team failed at path /spec/template/metadata/labels/team/'
```

In addition to the error returned, Kyverno also produces an Event in the same Namespace which contains this information. Note how although the policy matches on Pods, Kyverno blocked the Deployment you just created. This is because Kyverno intelligently applies policies written exclusively for Pods, using its rule auto-generation feature, to all standard Kubernetes Pod controllers, including the Deployment above.

Now, create a Pod with the required label.

```
kubectl run nginx --image nginx --labels team=backend
```

This Pod configuration is compliant with the policy and is allowed. Now that the Pod exists, wait just a few seconds longer and see what other action Kyverno took. Run the following command to retrieve the Policy Report that Kyverno just created.

```
kubectl get policyreport -o wide
```

Notice that there is a single Policy Report with just one result listed under the PASS column. This result is due to the Pod we just created having passed the policy.

```
NAME                                   KIND   NAME    PASS   FAIL   WARN   ERROR   SKIP   AGE
89044d72-8a1e-4af0-877b-9be727dc3ec4   Pod    nginx   1      0      0      0       0      15s
```

If you were to describe the above policy report, you would see more information about the policy and resource.

```
Results:
  Message: validation rule 'check-team' passed.
  Policy:    require-labels
  Resources:
    API Version: v1
    Kind:        Pod
    Name:        nginx
    Namespace:   default
    UID:         07d04dc0-fbb4-479a-b049-a3d63342b354
  Result: pass
  Rule:   check-team
  Scored: true
  Source: kyverno
  Timestamp:
    Nanos:   0
    Seconds: 1683759146
```

Policy reports are helpful in that they are both user- and tool-friendly, based upon an open standard, and separated from the policies which produced them. This separation has the benefit of report access being easy to grant and manage for other users who may not need or have access to Kyverno policies.

Now that you've experienced validate policies and seen a bit about policy reports, clean up by deleting the policy you created above.

```
kubectl delete clusterpolicy require-labels
```

Congratulations, you've just implemented a validation policy in your Kubernetes cluster! For more details on validation policies, see the validate section.

Mutation is the ability to change or mutate a resource in some way prior to it being admitted into the cluster. A mutate rule is similar to a validate rule in that it selects some type of resource (like Pods or ConfigMaps) and defines what the desired state should look like.

Add this Kyverno mutate policy to your cluster. This policy will add the label team to any new Pod and give it the value of bravo, but only if a Pod does not already have this label assigned. Kyverno has the ability to perform basic if-then logical decisions in a very easy way, making policies trivial to write and read. The +(team) notation uses a Kyverno anchor to define the behavior Kyverno should take if the label key is not found.

```
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
spec:
  rules:
  - name: add-team
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            +(team): bravo
EOF
```

Let's now create a new Pod which does not have the desired label defined.

```
kubectl run redis --image redis
```

Once the Pod has been created, get the Pod to see if the team label was added.

```
kubectl get pod redis --show-labels
```

You should see that the label team=bravo has been added by Kyverno.

Try one more Pod, this time one which does already define the team label.

```
kubectl run newredis --image redis -l team=alpha
```

Get this Pod back and check once again for labels.

```
kubectl get pod newredis --show-labels
```

This time, you should see Kyverno did not add the team label with the value defined in the policy since one was already found on the Pod.

Now that you've experienced mutate policies and seen how logic can be written easily, clean up by deleting the policy you created above.

```
kubectl delete clusterpolicy add-labels
```

Congratulations, you've just implemented a mutation policy in your Kubernetes cluster! For more details on mutate policies, see the mutate section.

Kyverno has the ability to generate (i.e., create) a new Kubernetes resource based upon a definition stored in a policy. Like both validate and mutate rules, Kyverno generate rules use similar concepts and structures to express policy. The generation ability is both powerful and flexible, and one of its most useful aspects is that, in addition to the initial generation, it has the ability to continually synchronize the resources it has generated. Generate rules can be a powerful automation tool and can solve many common challenges faced by Kubernetes operators. Let's look at one such use case in this guide.
Try one more Pod, this time one which does already define the team label.

```
kubectl run newredis --image redis -l team=alpha
```

Get this Pod back and check once again for labels.

```
kubectl get pod newredis --show-labels
```

This time, you should see Kyverno did not add the team label with the value defined in the policy since one was already found on the Pod.

Now that you've experienced mutate policies and seen how logic can be written easily, clean up by deleting the policy you created above.

```
kubectl delete clusterpolicy add-labels
```

Congratulations, you've just implemented a mutation policy in your Kubernetes cluster! For more details on mutate policies, see the mutate section.

Kyverno has the ability to generate (i.e., create) a new Kubernetes resource based upon a definition stored in a policy. Like both validate and mutate rules, Kyverno generate rules use similar concepts and structures to express policy. The generation ability is both powerful and flexible; one of its most useful aspects is that, in addition to the initial generation, it can continually synchronize the resources it has generated. Generate rules can be a powerful automation tool and can solve many common challenges faced by Kubernetes operators. Let's look at one such use case in this guide.

We will use a Kyverno generate policy to generate an image pull secret in a new Namespace. First, create this Kubernetes Secret in your cluster which will simulate a real image pull secret.

```
kubectl -n default create secret docker-registry regcred \
  --docker-server=myinternalreg.corp.com \
  --docker-username=john.doe \
  --docker-password=Passw0rd123! \
  --docker-email=john.doe@corp.com
```

Next, create the following Kyverno policy. The sync-secrets policy will match on any newly-created Namespace and will clone the Secret we just created earlier into that new Namespace.

```
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secrets
spec:
  rules:
  - name: sync-image-pull-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: v1
      kind: Secret
      name: regcred
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      clone:
        namespace: default
        name: regcred
EOF
```

Create a new Namespace to test the policy.

```
kubectl create ns mytestns
```

Get the Secrets in this new Namespace and see if regcred is present.

```
kubectl -n mytestns get secret
```

You should see that Kyverno has generated the regcred Secret using the source Secret from the default Namespace as the template. If you wish, you may also modify the source Secret and watch as Kyverno synchronizes those changes down to wherever it has generated it.

With a basic understanding of generate policies, clean up by deleting the policy you created above.

```
kubectl delete clusterpolicy sync-secrets
```

Congratulations, you've just implemented a generation policy in your Kubernetes cluster! For more details on generate policies, see the generate section." } ]
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "Kyverno", "subcategory": "Security & Compliance" }
[ { "data": "Learn about Kyverno and create your first policy through a Quick Start guide. Understand how to install and configure Kyverno. Get an overview of how Kyverno policies and rules work. Apply policies across clusters and delivery pipelines Create policies which can validate, mutate, generate, and clean up resources as well as perform verification of container images. Test Kyverno policies for effectiveness. View and audit Kyverno policy results with reports. Monitor and observe the operation of Kyverno using metrics. Using distributed tracing to introspect the internal operations of Kyverno. Security Processes and Guidelines Apply and test policies outside a cluster Custom Resource Definitions (CRDs) for Kyverno policies and other types. Processes for troubleshooting and recovery of Kyverno. Understand the various components of Kyverno and how it impacts high availability. Understand how and when Kyverno releases software. Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve. Created by" } ]
{ "category": "Provisioning", "file_name": "docs.github.com.md", "project_name": "Metarget", "subcategory": "Security & Compliance" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
To search for documents containing either one term or the other, you can use the OR operator. For example, the following query will match documents containing either sparse or index:

```
sparse OR index
```

To exclude files from your search results, you can use the NOT operator. For example, to exclude files in the testing directory, you can search:

```
"fatal error" NOT path:testing
```

You can use parentheses to express more complicated boolean expressions. For example:

```
(language:ruby OR language:python) AND NOT path:"/tests/"
```

You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example:

```
repo:github-linguist/linguist
```

To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example:

```
repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter
```

Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work.

To search for files within an organization, use the org: qualifier. For example:

```
org:github
```

To search for files within a personal account, use the user: qualifier. For example:

```
user:octocat
```

Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work.

To narrow down to specific languages, use the language: qualifier. For example:

```
language:ruby OR language:cpp OR language:csharp
```

For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it.

To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use:

```
path:unit_tests
```

The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression:

```
path:/(^|\/)README\.md$/
```

Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see "Using regular expressions."

You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use:

```
path:*.txt
```

To search for JavaScript files within a src directory, you could use:

```
path:src/*.js
```

By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example:

```
path:/src/*.js
```

Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example:

```
path:/src/**/*.js
```

You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use:

```
path:*.a?c
```

To match a path containing a literal ? character, you can quote it:

```
path:"file?"
```

Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?.
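Qualifiers can be freely combined with terms and with each other. As an illustrative query (the path pattern here is hypothetical), the following looks for JavaScript test files under any src directory within a single organization:

```
org:github language:javascript path:src/**/*.test.js
```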
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext:

```
language:go symbol:WithContext
```

In some languages, you can search for symbols using a prefix (e.g. a prefix of their class name). For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust.

You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type:

```
language:rust symbol:/^String::to_.*/
```

Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for a range of popular languages, and we are working on adding support for more. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based.

By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example:

```
content:README.md
```

This query would only match files containing the term README.md, rather than matching files named README.md.

To filter based on repository properties, you can use the is: qualifier. is: supports values such as archived and fork, as used in the examples below. For example:

```
path:/^MIT.txt$/ is:archived
```

Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search:

```
log4j NOT is:archived
```

To exclude forks from your results, you can search:

```
log4j NOT is:fork
```

Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use:

```
/sparse.*index/
```

Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use:

```
/^App\/src\//
```

Inside a regular expression, \n stands for a newline character, \t stands for a tab, and \x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expression features work in code search. However, "look-around" assertions are not supported.
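As a short sketch (the identifier prefix here is made up), common constructs such as character classes and repetition can be combined with qualifiers, assuming the regex features above are available:

```
language:python /def\s+load_[a-z_]+/
```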
All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses.

If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. For example, the following query:

```
printf("hello world\n");
```

Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear.

Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "Metarget", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it.

Organizations. The "owner" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organization's data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization.

You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes.

We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you.

In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you.

If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions.

Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other.

Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines.

Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you post. You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to.

You may create or upload User-Generated Content while using the Service.
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content.

We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms.

You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post.

Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 – D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 – D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it.

We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program.

Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and "fork" your repositories (this means that others may make their own copies of Content from your repositories in repositories they control). If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users.
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede.

Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand "inbound=outbound". We're just making it explicit.

You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service.

Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent.

Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care.

GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security.

If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy.

Short version: We own the service and all of our content. In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed.

GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service.
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository.

Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub.

Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service.

Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement.

Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms.

Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better.

Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk.

As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret.

Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as confidential. You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose.
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature).

Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law.

We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation.

Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change.

Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term.

Payment Based on Plan. For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made.

Payment Based on Usage. Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details.

Invoicing. For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S. Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement.
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support.

Short version: We provide our service "as is", and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect.

GitHub provides the Website and the Service "as is" and "as available", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement.

GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service.

Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you.

You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control.

Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved.

If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes.
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your expense.

Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them.

We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice.

Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California.

GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void.

Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement.
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q (Changes to These Terms). These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement, oral or written, and any other communications between you and GitHub relating to the subject matter of these terms, including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Kyverno", "subcategory": "Security & Compliance" }
[ { "data": "Kyverno (Greek for govern) is a policy engine designed specifically for Kubernetes. Some of its many features include: Kyverno allows cluster administrators to manage environment specific configurations independently of workload configurations and enforce configuration best practices for their clusters. Kyverno can be used to scan existing workloads for best practices, or can be used to enforce best practices by blocking or mutating API requests. Kyverno runs as a dynamic admission controller in a Kubernetes cluster. Kyverno receives validating and mutating admission webhook HTTP callbacks from the Kubernetes API server and applies matching policies to return results that enforce admission policies or reject requests. Kyverno policies can match resources using the resource kind, name, label selectors, and much more. Mutating policies can be written as overlays (similar to Kustomize) or as a RFC 6902 JSON Patch. Validating policies also use an overlay style syntax, with support for pattern matching and conditional (if-then-else) processing. Policy enforcement is captured using Kubernetes events. For requests that are either allowed or existed prior to introduction of a Kyverno policy, Kyverno creates Policy Reports in the cluster which contain a running list of resources matched by a policy, their status, and more. The diagram below shows the high-level logical architecture of Kyverno. The Webhook is the server which handles incoming AdmissionReview requests from the Kubernetes API server and sends them to the Engine for processing. It is dynamically configured by the Webhook Controller which watches the installed policies and modifies the webhooks to request only the resources matched by those policies. The Cert Renewer is responsible for watching and renewing the certificates, stored as Kubernetes Secrets, needed by the webhook. The Background Controller handles all generate and mutate-existing policies by reconciling UpdateRequests, an intermediary resource. And the Report Controllers handle creation and reconciliation of Policy Reports from their intermediary resources, Admission Reports and Background Scan Reports. Kyverno also supports high availability. A highly-available installation of Kyverno is one in which the controllers selected for installation are configured to run with multiple replicas. Depending on the controller, the additional replicas may also serve the purpose of increasing the scalability of Kyverno. See the high availability page for more details on the various Kyverno controllers, their components, and how availability is handled in each one. This section is intended to provide you with some quick guides on how to get Kyverno up and running and demonstrate a few of Kyvernos seminal features. There are quick start guides which focus on validation, mutation, as well as generation allowing you to select the one (or all) which is most relevant to your use case. These guides are intended for proof-of-concept or lab demonstrations only and not recommended as a guide for production. Please see the installation page for more complete information on how to install Kyverno in" }, { "data": "First, install Kyverno from the latest release manifest. ``` 1kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.12.0/install.yaml ``` Next, select the quick start guide in which you are interested. Alternatively, start at the top and work your way down. 
Next, select the quick start guide in which you are interested. Alternatively, start at the top and work your way down." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Notary", "subcategory": "Security & Compliance" }
[ { "data": "Notation v1.1.0 is available Notation is now GA and considered ready for production environments. Before you begin, you need: Create and run an OCI-compatible registry on your development computer using the distribution/distribution with the image deletion enabled. The following command creates a registry that is accessible at localhost:5001. ``` docker run -d -p 5001:5000 -e REGISTRYSTORAGEDELETE_ENABLED=true --name registry registry ``` If you want to use Notation with other registries, refer to which registries are compatible with Notary for more alternatives. See Authenticate with OCI-compliant registries when you log in to another OCI registry. The following commands build and push the wabbit-networks/net-monitor container image to your container registry. ``` docker build -t localhost:5001/net-monitor:v1 https://github.com/wabbit-networks/net-monitor.git#main docker push localhost:5001/net-monitor:v1 ``` Get the digest value of the localhost:5001/net-monitor:v1 image using docker inspect. For example: ``` docker inspect --format='{{index .RepoDigests 0}}' localhost:5001/net-monitor:v1 ``` Output: ``` sha256:073b75987e95b89f187a89809f08a32033972bb63cda279db8a9ca16b7ff555a ``` In the above example, the digest value is sha256:073b75987e95b89f187a89809f08a32033972bb63cda279db8a9ca16b7ff555a. The reference to the container image using the digest value is localhost:5000/net-monitor@sha256:073b75987e95b89f187a89809f08a32033972bb63cda279db8a9ca16b7ff555a. Use notation ls to show any signatures associated with the container image you built and pushed in the previous section. ``` IMAGE=localhost:5001/net-monitor@sha256:073b75987e95b89f187a89809f08a32033972bb63cda279db8a9ca16b7ff555a notation ls $IMAGE ``` Confirm there are no signatures shown in the output. Use notation cert generate-test to generate a test RSA key for signing artifacts, and a self-signed X.509 test certificate for verifying artifacts. The following command generates a test key and a self-signed X.509 certificate. With the --default flag, the test key is set as a default signing key. The self-signed X.509 certificate is added to a named trust store wabbit-networks.io of type ca. ``` notation cert generate-test --default \"wabbit-networks.io\" ``` Use notation key ls to confirm the signing key is correctly configured. Key name with a * prefix is the default key. ``` notation key ls ``` Use notation cert ls to confirm the certificate is stored in the trust store. ``` notation cert ls ``` Use notation sign to sign the container image. ``` notation sign $IMAGE ``` By default, the signature format is" }, { "data": "Use --signature-format to use COSE signature format. ``` notation sign --signature-format cose $IMAGE ``` The generated signature is pushed to the registry and the digest of the container image returned. Use notation ls to show the signature associated with the container image. ``` notation ls $IMAGE ``` Confirm there is one signature, for example: ``` $ notation ls $IMAGE localhost:5001/net-monitor@sha256:073b75987e95b89f187a89809f08a32033972bb63cda279db8a9ca16b7ff555a application/vnd.cncf.notary.v2.signature sha256:ba3a68a28648ba18c51a479145fca60d96b43dc96c6ab22f412c89ac56a9038b ``` To verify the container image, configure the trust policy to specify trusted identities that sign the artifacts, and level of signature verification to use. For more details, see trust policy spec. 
Create a JSON file with the following trust policy, for example: ``` cat <<EOF > ./trustpolicy.json { \"version\": \"1.0\", \"trustPolicies\": [ { \"name\": \"wabbit-networks-images\", \"registryScopes\": [ \"*\" ], \"signatureVerification\": { \"level\" : \"strict\" }, \"trustStores\": [ \"ca:wabbit-networks.io\" ], \"trustedIdentities\": [ \"*\" ] } ] } EOF ``` Use notation policy import to import the trust policy configuration from a JSON file. For example: ``` notation policy import ./trustpolicy.json ``` Use notation policy show to view the applied policy configuration. For example: ``` notation policy show ``` The above JSON creates a trust policy named wabbit-networks-images. The policy has registryScopes set to *, which applies the policy to the artifacts of any registry. The signatureVerification level is set to strict, which performs all validation checks; any validation failure causes signature verification to fail. This policy uses the wabbit-networks.io trust store of type ca, which was created in the previous step. For more details on trust policies, see trust policy spec. To apply the trust policy only to specific repositories, set registryScopes to those repositories. For example: ``` \"registryScopes\": [ \"localhost:5001/net-monitor\", \"localhost:5001/nginx\", \"localhost:5001/hello-world\" ] ``` Use notation verify to verify signatures associated with the container image. ``` notation verify $IMAGE ``` The digest of the supplied artifact is returned upon successful verification. To remove the sample registry running on your development computer: ``` docker rm -f registry ``` To reset your notation configuration, remove the notation configuration directory. For more details, see Remove the configuration files." } ]
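In an automated pipeline, the exit code of notation verify is what matters. A minimal CI-style gate might look like the following sketch (it assumes the same $IMAGE reference used above; the echo messages are illustrative, not from the original):

```
IMAGE=localhost:5001/net-monitor@sha256:073b75987e95b89f187a89809f08a32033972bb63cda279db8a9ca16b7ff555a
# notation verify exits non-zero when verification fails, so it can gate a deploy step.
if notation verify "$IMAGE"; then
  echo "signature verified - safe to deploy"
else
  echo "signature verification failed - aborting" >&2
  exit 1
fi
```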
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Metarget", "subcategory": "Security & Compliance" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "github-privacy-statement.md", "project_name": "Metarget", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organization's data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "post. You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 through D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 through D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "control). If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our content. In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, \"Confidential Information\"), regardless of whether it is marked or identified as such. You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the \"Purpose\"), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, \"Feedback\"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service, or that otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "expense. Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Open Policy Administration Layer (OPAL)", "subcategory": "Security & Compliance" }
[ { "data": "This tutorial will teach you how to run OPAL using the official docker images. | 0 | 1 | |:|:| | Use this tutorial if you | Understand what OPAL is for (main features, how it works).Want to run OPAL with a real configuration.Want a step-by-step guide for deploying in production. | | Use the other tutorial if you | Want to explore OPAL quickly.Get a working playground with one docker-compose command.Want to learn about OPAL core features and see what OPAL can do for you. | Use this tutorial if you Want to run OPAL with a real configuration. Use the other tutorial if you Want to explore OPAL quickly. Get a working playground with one docker-compose command. Want to learn about OPAL core features and see what OPAL can do for you." } ]
{ "category": "Provisioning", "file_name": "#running-opa.md", "project_name": "Open Policy Agent (OPA)", "subcategory": "Security & Compliance" }
[ { "data": "The Open Policy Agent (OPA, pronounced oh-pa) is an open source, general-purpose policy engine that unifies policy enforcement across the stack. OPA provides a high-level declarative language that lets you specify policy as code and simple APIs to offload policy decision-making from your software. You can use OPA to enforce policies in microservices, Kubernetes, CI/CD pipelines, API gateways, and more. OPA was originally created by Styra and is proud to be a graduated project in the Cloud Native Computing Foundation (CNCF) landscape. For details read the CNCF announcement. Read this page to learn about the core concepts in OPAs policy language (Rego) as well as how to download, run, and integrate OPA. OPA decouples policy decision-making from policy enforcement. When your software needs to make policy decisions it queries OPA and supplies structured data (e.g., JSON) as input. OPA accepts arbitrary structured data as input. OPA generates policy decisions by evaluating the query input against policies and data. OPA and Rego are domain-agnostic so you can describe almost any kind of invariant in your policies. For example: Policy decisions are not limited to simple yes/no or allow/deny answers. Like query inputs, your policies can generate arbitrary structured data as output. Lets look at an example. Imagine you work for an organization with the following system: There are three kinds of components in the system: All the servers, networks, and ports are provisioned by a script. The script receives a JSON representation of the system as input: ``` { \"servers\": [ {\"id\": \"app\", \"protocols\": [\"https\", \"ssh\"], \"ports\": [\"p1\", \"p2\", \"p3\"]}, {\"id\": \"db\", \"protocols\": [\"mysql\"], \"ports\": [\"p3\"]}, {\"id\": \"cache\", \"protocols\": [\"memcache\"], \"ports\": [\"p3\"]}, {\"id\": \"ci\", \"protocols\": [\"http\"], \"ports\": [\"p1\", \"p2\"]}, {\"id\": \"busybox\", \"protocols\": [\"telnet\"], \"ports\": [\"p1\"]} ], \"networks\": [ {\"id\": \"net1\", \"public\": false}, {\"id\": \"net2\", \"public\": false}, {\"id\": \"net3\", \"public\": true}, {\"id\": \"net4\", \"public\": true} ], \"ports\": [ {\"id\": \"p1\", \"network\": \"net1\"}, {\"id\": \"p2\", \"network\": \"net3\"}, {\"id\": \"p3\", \"network\": \"net2\"} ] }``` Earlier in the day your boss told you about a new security policy that has to be implemented: ``` Servers reachable from the Internet must not expose the insecure 'http' protocol. Servers are not allowed to expose the 'telnet' protocol. ``` The policy needs to be enforced when servers, networks, and ports are provisioned and the compliance team wants to periodically audit the system to find servers that violate the policy. Your boss has asked you to determine if OPA would be a good fit for implementing the policy. OPA policies are expressed in a high-level declarative language called Rego. Rego (pronounced ray-go) is purpose-built for expressing policies over complex hierarchical data structures. For detailed information on Rego see the Policy Language documentation. The examples below are interactive! If you edit the input data above containing servers, networks, and ports, the output will change below. Similarly, if you edit the queries or rules in the examples below the output will change. As you read through this section, try changing the input, queries, and rules and observe the difference in output. They can also be run locally on your machine using the opa eval command, here are setup instructions. 
Note that the examples in this section try to represent the best practices. As such, they make use of keywords that are meant to become standard keywords at some point in time, but have been introduced" }, { "data": "See the docs on future keywords for more information. ``` package example import rego.v1``` When OPA evaluates policies it binds data provided in the query to a global variable called input. You can refer to data in the input using the . (dot) operator. ``` input.servers``` ``` [ { \"id\": \"app\", \"ports\": [ \"p1\", \"p2\", \"p3\" ], \"protocols\": [ \"https\", \"ssh\" ] }, { \"id\": \"db\", \"ports\": [ \"p3\" ], \"protocols\": [ \"mysql\" ] }, { \"id\": \"cache\", \"ports\": [ \"p3\" ], \"protocols\": [ \"memcache\" ] }, { \"id\": \"ci\", \"ports\": [ \"p1\", \"p2\" ], \"protocols\": [ \"http\" ] }, { \"id\": \"busybox\", \"ports\": [ \"p1\" ], \"protocols\": [ \"telnet\" ] } ]``` To refer to array elements you can use the familiar square-bracket syntax: ``` input.servers[0].protocols[0]``` ``` \"https\"``` You can use the same square bracket syntax if keys contain other than [a-zA-Z0-9_]. E.g., input[\"foo~bar\"]. If you refer to a value that does not exist, OPA returns undefined. Undefined means that OPA was not able to find any results. ``` input.deadbeef``` ``` undefined decision``` ``` package example import rego.v1``` To produce policy decisions in Rego you write expressions against input and other data. ``` input.servers[0].id == \"app\"``` ``` true``` OPA includes a set of built-in functions you can use to perform common operations like string manipulation, regular expression matching, arithmetic, aggregation, and more. ``` count(input.servers[0].protocols) >= 1``` ``` true``` For a complete list of built-in functions supported in OPA out-of-the-box see the Policy Reference page. Multiple expressions are joined together with the ; (AND) operator. For queries to produce results, all of the expressions in the query must be true or defined. The order of expressions does not matter. ``` input.servers[0].id == \"app\"; input.servers[0].protocols[0] == \"https\"``` ``` true``` You can omit the ; (AND) operator by splitting expressions across multiple lines. The following query has the same meaning as the previous one: ``` input.servers[0].id == \"app\" input.servers[0].protocols[0] == \"https\"``` ``` true``` If any of the expressions in the query are not true (or defined) the result is undefined. In the example below, the second expression is false: ``` input.servers[0].id == \"app\" input.servers[0].protocols[0] == \"telnet\"``` ``` undefined decision``` ``` package example import rego.v1``` You can store values in intermediate variables using the := (assignment) operator. Variables can be referenced just like input. ``` s := input.servers[0] s.id == \"app\" p := s.protocols[0] p == \"https\"``` ``` ++-+ | p | s | ++-+ | \"https\" | {\"id\":\"app\",\"ports\":[\"p1\",\"p2\",\"p3\"],\"protocols\":[\"https\",\"ssh\"]} | ++-+``` When OPA evaluates expressions, it finds values for the variables that make all of the expressions true. If there are no variable assignments that make all of the expressions true, the result is undefined. ``` s := input.servers[0] s.id == \"app\" s.protocols[1] == \"telnet\"``` ``` undefined decision``` Variables are immutable. OPA reports an error if you try to assign the same variable twice. 
``` s := input.servers[0] s := input.servers[1]``` ``` 1 error occurred: 2:1: rego_compile_error: var s assigned above``` OPA must be able to enumerate the values for all variables in all expressions. If OPA cannot enumerate the values of a variable in any expression, OPA will report an error. ``` x := 1 x != y # y has not been assigned a value``` ``` 2 errors occurred: 2:1: rego_unsafe_var_error: var y is unsafe 2:1: rego_unsafe_var_error: var is unsafe``` ``` package example import rego.v1``` Like other declarative languages (e.g. SQL), iteration in Rego happens implicitly when you inject variables into expressions. There are explicit iteration constructs to express FOR ALL and FOR SOME, see below. To understand how iteration works in Rego, imagine you need to check if any networks are public. Recall that the networks are supplied inside an array: ``` input.networks``` ``` [ { \"id\": \"net1\", \"public\": false }, { \"id\": \"net2\", \"public\": false }, { \"id\": \"net3\", \"public\": true }, { \"id\": \"net4\", \"public\": true } ]``` One option would be to test each network in the input: ``` input.networks[0].public == true``` ``` false``` ``` input.networks[1].public == true``` ``` false``` ``` input.networks[2].public == true``` ``` true``` This approach is problematic because there may be too many networks to list statically, or more importantly, the number of networks may not be known in advance. In Rego, the solution is to substitute the array index with a variable. ``` some i; input.networks[i].public == true``` ``` ++ | i | ++ | 2 | | 3 | ++``` Now the query asks for values of i that make the overall expression true. When you substitute variables in references, OPA automatically finds variable assignments that satisfy all of the expressions in the query. Just like intermediate variables, OPA returns the values of the variables. You can substitute as many variables as you want. For example, to find out if any servers expose the insecure \"http\" protocol you could write: ``` some i, j; input.servers[i].protocols[j] == \"http\"``` ``` +++ | i | j | +++ | 3 | 0 | +++``` If variables appear multiple times the assignments satisfy all of the expressions. For example, to find the ids of ports connected to public networks, you could write: ``` some i, j id := input.ports[i].id input.ports[i].network == input.networks[j].id input.networks[j].public``` ``` ++++ | i | id | j | ++++ | 1 | \"p2\" | 2 | ++++``` Providing good names for variables can be hard. If you only refer to the variable once, you can replace it with the special _ (wildcard variable) operator. Conceptually, each instance of _ is a unique variable. ``` input.servers[_].protocols[_] == \"http\"``` ``` true``` Just like references that refer to non-existent fields or expressions that fail to match, if OPA is unable to find any variable assignments that satisfy all of the expressions, the result is undefined. ``` some i; input.servers[i].protocols[i] == \"ssh\" # there is no assignment of i that satisfies the expression``` ``` undefined decision``` While plain iteration serves as a powerful building block, Rego also features ways to express FOR SOME and FOR ALL more explicitly. To ensure backwards-compatibility, the keywords discussed below are introduced slowly. In the first stage, users can opt in to the new keywords via a special import: import rego.v1 (or, alternatively, import future.keywords.every) makes the every keyword described here available. (Importing every means also importing in without an extra import statement.) 
At some point in the future, the keyword will become standard, and the import will become a no-op that can safely be removed. This should give all users ample time to update their policies, so that the new keyword will not cause clashes with existing variable names. See the docs on future keywords for more information. some ... in ... is used to iterate over the collection (its last argument), and will bind its variables (key, value position) to the collection items. It introduces new bindings to the evaluation of the rest of the rule body. Using some, we can express the rules introduced above in different ways: ``` public_network contains net.id if { some net in input.networks # some network exists and.. net.public # it is public. } shell_accessible contains server.id if { some server in input.servers \"telnet\" in server.protocols } shell_accessible contains server.id if { some server in input.servers \"ssh\" in server.protocols }``` ``` shell_accessible``` ``` [ \"app\", \"busybox\" ]``` For details on some ... in ..., see the documentation of the in operator. Expanding on the examples above, every allows us to succinctly express that a condition holds for all elements of a domain. ``` no_telnet_exposed if { every server in input.servers { every protocol in server.protocols { \"telnet\" != protocol } } } no_telnet_exposed_alt if { # alternative: every + not-in every server in input.servers { not \"telnet\" in server.protocols } } no_telnet_exposed_alt2 if { # alternative: not + rule + some not any_telnet_exposed } any_telnet_exposed if { some server in input.servers \"telnet\" in server.protocols }``` ``` { \"servers\": [ { \"id\": \"busybox\", \"protocols\": [\"http\", \"ftp\"] }, { \"id\": \"db\", \"protocols\": [\"mysql\", \"ssh\"] }, { \"id\": \"web\", \"protocols\": [\"https\"] } ] }``` ``` no_telnet_exposed``` ``` true``` For all the details, see Every Keyword. Rego lets you encapsulate and re-use logic with rules. Rules are just if-then logic statements. Rules can either be complete or partial. ``` package example.rules import rego.v1``` Complete rules are if-then statements that assign a single value to a variable. For example: ``` any_public_networks := true if { some net in input.networks # some network exists and.. net.public # it is public. }``` Every rule consists of a head and a body. In Rego we say the rule head is true if the rule body is true for some set of variable assignments. In the example above any_public_networks := true is the head and some net in input.networks; net.public is the body. You can query for the value generated by rules just like any other value: ``` any_public_networks``` ``` true``` All values generated by rules can be queried via the global data variable. ``` data.example.rules.any_public_networks``` ``` true``` You can query the value of any rule loaded into OPA by referring to it with an absolute path. The path of a rule is always: data.<package-path>.<rule-name>. If you omit the = <value> part of the rule head the value defaults to true. You could rewrite the example above as follows without changing the meaning: ``` any_public_networks if { some net in input.networks net.public }``` To define constants, omit the rule body. When you omit the rule body it defaults to true. Since the rule body is true, the rule head is always true/defined. 
``` package example.constants pi := 3.14``` Constants defined like this can be queried just like any other values: ``` pi > 3``` ``` true``` If OPA cannot find variable assignments that satisfy the rule body, we say that the rule is undefined. For example, if the input provided to OPA does not include a public network then any_public_networks will be undefined (which is not the same as false). Below, OPA is given a different set of input networks (none of which are public): ``` { \"networks\": [ {\"id\": \"n1\", \"public\": false}, {\"id\": \"n2\", \"public\": false} ] }``` ``` any_public_networks``` ``` undefined decision``` ``` package example import" }, { "data": "rego.v1``` Partial rules are if-then statements that generate a set of values and assign that set to a variable. For example: ``` public_network contains net.id if { some net in input.networks # some network exists and.. net.public # it is public. }``` In the example above public_network contains net.id if is the rule head and some net in input.networks; net.public is the rule body. You can query for the entire set of values just like any other value: ``` public_network``` ``` [ \"net3\", \"net4\" ]``` Iteration over the set of values can be done with the some ... in ... expression: ``` some net in public_network``` ``` +--+ | net | +--+ | \"net3\" | | \"net4\" | +--+``` With a literal, or a bound variable, you can check if the value exists in the set via ... in ...: ``` \"net3\" in public_network``` ``` true``` You can also iterate over the set of values by referencing the set elements with a variable: ``` some n; public_network[n]``` ``` +--+-+ | n | public_network[n] | +--+-+ | \"net3\" | \"net3\" | | \"net4\" | \"net4\" | +--+-+``` Lastly, you can check if a value exists in the set using the same syntax: ``` public_network[\"net3\"]``` ``` \"net3\"``` In addition to partially defining sets, you can also partially define key/value pairs (aka objects). See Rules in the language guide for more information. When you join multiple expressions together in a query you are expressing logical AND. To express logical OR in Rego you define multiple rules with the same name. Let's look at an example. Imagine you wanted to know if any servers expose protocols that give clients shell access. To determine this you could define a complete rule that declares shell_accessible to be true if any servers expose the \"telnet\" or \"ssh\" protocols: ``` package example.logical_or import rego.v1 default shell_accessible := false shell_accessible if { input.servers[_].protocols[_] == \"telnet\" } shell_accessible if { input.servers[_].protocols[_] == \"ssh\" }``` ``` { \"servers\": [ { \"id\": \"busybox\", \"protocols\": [\"http\", \"telnet\"] }, { \"id\": \"web\", \"protocols\": [\"https\"] } ] }``` ``` shell_accessible``` ``` true``` The default keyword tells OPA to assign a value to the variable if all of the other rules with the same name are undefined. When you use logical OR with partial rules, each rule definition contributes to the set of values assigned to the variable. For example, the example above could be modified to generate a set of servers that expose \"telnet\" or \"ssh\". 
``` package example.logical_or import rego.v1 shell_accessible contains server.id if { server := input.servers[_] server.protocols[_] == \"telnet\" } shell_accessible contains server.id if { server := input.servers[_] server.protocols[_] == \"ssh\" }``` ``` { \"servers\": [ { \"id\": \"busybox\", \"protocols\": [\"http\", \"telnet\"] }, { \"id\": \"db\", \"protocols\": [\"mysql\", \"ssh\"] }, { \"id\": \"web\", \"protocols\": [\"https\"] } ] }``` ``` shell_accessible``` ``` [ \"busybox\", \"db\" ]``` The sections above explain the core concepts in Rego. To put it all together let's review the desired policy (in English): ``` Servers reachable from the Internet must not expose the insecure 'http' protocol. Servers are not allowed to expose the 'telnet' protocol. ``` At a high level the policy needs to identify servers that violate some conditions. To implement this policy we could define rules called violation that generate a set of servers that are in violation. For example: ``` package example import rego.v1 allow if { # allow is true if... count(violation) == 0 # there are zero violations. } violation contains server.id if { # a server is in the violation set if... some server in public_servers # it exists in the 'public_servers' set and... \"http\" in server.protocols # it contains the insecure \"http\" protocol. } violation contains server.id if { # a server is in the violation set if... some server in input.servers # it exists in the input.servers collection and... \"telnet\" in server.protocols # it contains the \"telnet\" protocol. } public_servers contains server if { # a server exists in the public_servers set if... some server in input.servers # it exists in the input.servers collection and... some port in server.ports # it references a port in the input.ports collection and... some input_port in input.ports port == input_port.id some input_network in input.networks # the port references a network in the input.networks collection and... input_port.network == input_network.id input_network.public # the network is public. }``` ``` some x; violation[x]``` ``` +--+--+ | x | violation[x] | +--+--+ | \"busybox\" | \"busybox\" | | \"ci\" | \"ci\" | +--+--+``` This section explains how you can query OPA directly and interact with it on your own machine. To get started download an OPA binary for your platform from GitHub releases: On macOS (ARM 64-bit): ``` curl -L -o opa https://openpolicyagent.org/downloads/v0.65.0/opa_darwin_arm64_static ``` Or using Homebrew: ``` brew install opa ``` On Linux (64-bit): ``` curl -L -o opa https://openpolicyagent.org/downloads/v0.65.0/opa_linux_amd64_static ``` Set permissions on the OPA executable: ``` chmod 755 ./opa ``` Checksums for all binaries are available in the download path by appending .sha256 to the binary filename. Verify the macOS binary checksum: ``` curl -L -o opa_darwin_amd64 https://openpolicyagent.org/downloads/v0.65.0/opa_darwin_amd64 curl -L -o opa_darwin_amd64.sha256 https://openpolicyagent.org/downloads/v0.65.0/opa_darwin_amd64.sha256 shasum -c opa_darwin_amd64.sha256 ``` The simplest way to interact with OPA is via the command line using the opa eval sub-command. It is a swiss-army knife that you can use to evaluate arbitrary Rego expressions and policies. opa eval supports a large number of options for controlling evaluation. Commonly used flags include: | Flag | Short | Description | |:|:--|:--| | --bundle | -b | Load a bundle file or directory into OPA. This flag can be repeated. | | --data | -d | Load policy or data files into OPA. 
This flag can be repeated. | | --input | -i | Load a data file and use it as input. This flag cannot be repeated. | | --format | -f | Set the output format to use. The default is json and is intended for programmatic use. The pretty format emits more human-readable output. | | --fail | | Exit with a non-zero exit code if the query is undefined. | | --fail-defined | | Exit with a non-zero exit code if the query is not undefined. | For example: input.json: ``` { \"servers\": [ {\"id\": \"app\", \"protocols\": [\"https\", \"ssh\"], \"ports\": [\"p1\", \"p2\", \"p3\"]}, {\"id\": \"db\", \"protocols\": [\"mysql\"], \"ports\": [\"p3\"]}, {\"id\": \"cache\", \"protocols\": [\"memcache\"], \"ports\": [\"p3\"]}, {\"id\": \"ci\", \"protocols\": [\"http\"], \"ports\": [\"p1\", \"p2\"]}, {\"id\": \"busybox\", \"protocols\": [\"telnet\"], \"ports\": [\"p1\"]} ], \"networks\": [ {\"id\": \"net1\", \"public\": false}, {\"id\": \"net2\", \"public\": false}, {\"id\": \"net3\", \"public\": true}, {\"id\": \"net4\", \"public\": true} ], \"ports\": [ {\"id\": \"p1\", \"network\": \"net1\"}, {\"id\": \"p2\", \"network\": \"net3\"}, {\"id\": \"p3\", \"network\": \"net2\"} ] }``` example.rego: ``` package example import rego.v1 default allow := false # unless otherwise defined, allow is false allow if { # allow is true if..." }, { "data": "count(violation) == 0 # there are zero violations. } violation contains server.id if { # a server is in the violation set if... some server public_servers[server] # it exists in the 'public_servers' set and... server.protocols[_] == \"http\" # it contains the insecure \"http\" protocol. } violation contains server.id if { # a server is in the violation set if... server := input.servers[_] # it exists in the input.servers collection and... server.protocols[_] == \"telnet\" # it contains the \"telnet\" protocol. } public_servers contains server if { # a server exists in the 'public_servers' set if... some i, j server := input.servers[_] # it exists in the input.servers collection and... server.ports[_] == input.ports[i].id # it references a port in the input.ports collection and... input.ports[i].network == input.networks[j].id # the port references a network in the input.networks collection and... input.networks[j].public # the network is public. }``` ``` ./opa eval \"1*2+3\" ./opa eval -i input.json -d example.rego \"data.example.violation[x]\" ./opa eval --fail-defined -i input.json -d example.rego \"data.example.violation[x]\" echo $? ``` OPA includes an interactive shell or REPL (Read-Eval-Print-Loop) accessible via the opa run sub-command. You can use the REPL to experiment with policies and prototype new ones. To start the REPL just: ``` ./opa run ``` When you enter statements in the REPL, OPA evaluates them and prints the result. ``` true true 3.14 3.14 [\"hello\", \"world\"] [ \"hello\", \"world\" ] ``` Most REPLs let you define variables that you can reference later on. OPA allows you to do something similar. For example, you can define a pi constant as follows: ``` pi := 3.14 ``` Once pi is defined, you can query for the value and write expressions in terms of it: ``` pi 3.14 pi > 3 true ``` Quit out of the REPL by pressing Control-D or typing exit: ``` exit ``` You can load policy and data files into the REPL by passing them on the command line. By default, JSON and YAML files are rooted under data. 
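A quick illustration of that rooting, using a hypothetical file (the name and contents below are made up for the sketch): a top-level key in a JSON file loaded this way becomes addressable as a field of data inside the REPL.

```
# greeting.json (hypothetical) contains: {"message": "hi"}
./opa run greeting.json
> data.message
"hi"
```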
``` opa run input.json ``` Run a few queries to poke around the data: ``` data.servers[0].protocols[1] ``` ``` data.servers[i].protocols[j] ``` ``` net := data.networks[_]; net.public ``` To set a data file as the input document in the REPL, prefix the file path: ``` opa run example.rego repl.input:input.json ``` ``` data.example.public_servers[s] ``` Quit out of the REPL by pressing Control-D or typing exit: ``` exit ``` To integrate with OPA you can run it as a server and execute queries over HTTP. You can start OPA as a server with -s or --server: ``` ./opa run --server ./example.rego ``` By default OPA listens for HTTP connections on 0.0.0.0:8181. See opa run --help for a list of options to change the listening address, enable TLS, and more. For example, if the --v1-compatible flag is set, OPA will listen for HTTP connections on localhost:8181 by default. Inside of another terminal use curl (or a similar tool) to access OPA's HTTP API. When you query the /v1/data HTTP API you must wrap input data inside of a JSON object: ``` { \"input\": <value> } ``` Create a copy of the input file for sending via curl: ``` cat <<EOF > v1-data-input.json { \"input\": $(cat input.json) } EOF ``` Execute a few curl requests and inspect the output: ``` curl localhost:8181/v1/data/example/violation -d @v1-data-input.json -H 'Content-Type: application/json' curl localhost:8181/v1/data/example/allow -d @v1-data-input.json -H 'Content-Type: application/json' ``` By default" }, { "data": "the data.system.main decision is used to serve policy queries without a path. When you execute queries without providing a path, you do not have to wrap the input. If the data.system.main decision is undefined it is treated as an error: ``` curl localhost:8181 -i -d @input.json -H 'Content-Type: application/json' ``` You can restart OPA and configure it to use any decision as the default decision: ``` ./opa run --server --set=default_decision=example/allow ./example.rego ``` Re-run the last curl command from above: ``` curl localhost:8181 -i -d @input.json -H 'Content-Type: application/json' ``` OPA can be embedded inside Go programs as a library. The simplest way to embed OPA as a library is to import the github.com/open-policy-agent/opa/rego package. ``` import \"github.com/open-policy-agent/opa/rego\" ``` Call the rego.New function to create an object that can be prepared or evaluated: ``` r := rego.New( rego.Query(\"x = data.example.allow\"), rego.Load([]string{\"./example.rego\"}, nil)) ``` The rego.Rego type supports several options that let you customize evaluation. See the GoDoc page for details. After constructing a new rego.Rego object you can call PrepareForEval() to obtain an executable query. If PrepareForEval() fails, it indicates one of the options passed to the rego.New() call was invalid (e.g., parse error, compile error, etc.) ``` ctx := context.Background() query, err := r.PrepareForEval(ctx) if err != nil { // handle error } ``` The prepared query object can be cached in memory, shared across multiple goroutines, and invoked repeatedly with different inputs. Call Eval() to execute the prepared query. ``` bs, err := ioutil.ReadFile(\"./input.json\") if err != nil { // handle error } var input interface{} if err := json.Unmarshal(bs, &input); err != nil { // handle error } rs, err := query.Eval(ctx, rego.EvalInput(input)) if err != nil { // handle error } ``` The policy decision is contained in the results returned by the Eval() call. 
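Eval() returns a rego.ResultSet; each result carries the bindings for the query's variables. A minimal sketch of walking the whole set, assuming the same query as above (which binds the variable x):

```go
for _, result := range rs {
	// each binding maps a query variable (here "x") to the value it was assigned
	fmt.Println(result.Bindings["x"])
}
```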
You can inspect the decision and handle it accordingly: ``` // In this example we expect a single result (stored in the variable 'x'). fmt.Println(\"Result:\", rs[0].Bindings[\"x\"]) ``` You can combine the steps above into a simple command-line program that evaluates policies and outputs the result: main.go: ``` package main import ( \"context\" \"encoding/json\" \"fmt\" \"log\" \"os\" \"github.com/open-policy-agent/opa/rego\" ) func main() { ctx := context.Background() // Construct a Rego object that can be prepared or evaluated. r := rego.New( rego.Query(os.Args[2]), rego.Load([]string{os.Args[1]}, nil)) // Create a prepared query that can be evaluated. query, err := r.PrepareForEval(ctx) if err != nil { log.Fatal(err) } // Load the input document from stdin. var input interface{} dec := json.NewDecoder(os.Stdin) dec.UseNumber() if err := dec.Decode(&input); err != nil { log.Fatal(err) } // Execute the prepared query. rs, err := query.Eval(ctx, rego.EvalInput(input)) if err != nil { log.Fatal(err) } // Do something with the result. fmt.Println(rs) } ``` Run the code above as follows: ``` go run main.go example.rego 'data.example.violation' < input.json ``` Congratulations on making it through the introduction to OPA. If you made it this far, you have learned the core concepts behind OPA's policy language as well as how to get OPA and run it on your own. If you have more questions about how to write policies in Rego check out: If you want to try OPA for a specific use case check out: Some popular tutorials include: Don't forget to install the OPA (Rego) Plugin for your favorite IDE or Text Editor." } ]
{ "category": "Provisioning", "file_name": "download.md", "project_name": "Open Policy Containers", "subcategory": "Security & Compliance" }
[ { "data": "policy is available on Linux, macOS and Windows platforms. These are available as tarballs on the releases page. You can install policy via Homebrew for macOS or LinuxBrew for Linux: ``` brew install opcr-io/tap/policy``` You can install policy via WinGet on Windows 10+: ``` winget install OpenPolicyContainers.Policy``` You can get policy via a Go install: ``` go get -u github.com/opcr-io/policy```" } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Open Policy Agent (OPA)", "subcategory": "Security & Compliance" }
[ { "data": "This document helps you get OPA up and running in different deployment environments. You should read this document if you are planning to deploy OPA. Docker makes OPA easy to deploy in different types of environments. This section explains how to use the official OPA Docker images. If this is your first time deploying OPA and you plan to use one of the Docker images, we recommend you review this section to familiarize yourself with the basics. OPA releases are available as images on Docker Hub. If you start OPA outside of Docker without any arguments, it prints a list of available commands. By default, the official OPA Docker image executes the run command which starts an instance of OPA as an interactive shell. This is nice for development; however, for deployments, we want to run OPA as a server. The run command accepts a --server (or -s) flag that starts OPA as a server. See --help for more information on other arguments. The most important command line arguments for OPA's server mode are: By default, OPA listens for normal HTTP connections on 0.0.0.0:8181. To make OPA listen for HTTPS connections, see Security. We can run OPA as a server using Docker: ``` docker run -p 8181:8181 openpolicyagent/opa \\ run --server --log-level debug ``` Test that OPA is available: ``` curl -i localhost:8181/ ``` OPA logs to stderr and the level can be set with --log-level/-l. The default log level is info which causes OPA to log request/response information. ``` {\"client_addr\":\"[::1]:64427\",\"level\":\"debug\",\"msg\":\"Received request.\",\"req_body\":\"\",\"req_id\":1,\"req_method\":\"GET\",\"req_params\":{},\"req_path\":\"/v1/data\",\"time\":\"2019-12-11T18:22:18-08:00\"} {\"client_addr\":\"[::1]:64427\",\"level\":\"debug\",\"msg\":\"Sent response.\",\"req_id\":1,\"req_method\":\"GET\",\"req_path\":\"/v1/data\",\"resp_bytes\":13,\"resp_duration\":0.392554,\"resp_status\":200,\"time\":\"2019-12-11T18:22:18-08:00\"} ``` If the log level is set to debug the request and response message bodies will be logged. This is useful for development; however, it can be expensive in production. ``` {\"addrs\":[\":8181\"],\"insecure_addr\":\"\",\"level\":\"info\",\"msg\":\"First line of log stream.\",\"time\":\"2019-05-08T17:25:26-07:00\"} {\"level\":\"info\",\"msg\":\"Starting decision log uploader.\",\"plugin\":\"decision_logs\",\"time\":\"2019-05-08T17:25:26-07:00\"} {\"client_addr\":\"[::1]:63902\",\"level\":\"info\",\"msg\":\"Received request.\",\"req_body\":\"\",\"req_id\":1,\"req_method\":\"GET\",\"req_params\":{},\"req_path\":\"/v1/data\",\"time\":\"2019-05-08T17:25:41-07:00\"} {\"client_addr\":\"[::1]:63902\",\"level\":\"info\",\"msg\":\"Sent response.\",\"req_id\":1,\"req_method\":\"GET\",\"req_path\":\"/v1/data\",\"resp_body\":\"{\\\"decision_id\\\":\\\"f4b41501-2408-4a14-8269-1c1085abeda4\\\",\\\"result\\\":{}}\",\"resp_bytes\":66,\"resp_duration\":2.545972,\"resp_status\":200,\"time\":\"2019-05-08T17:25:41-07:00\"} ``` The default log format is json and is intended for production use. For more human-readable formats use json-pretty or text. Note: The text log format is not performance optimized or intended for production use. By default, OPA does not include any data or policies. The simplest way to load data and policies into OPA is to provide them via the file system as command line arguments. When running inside Docker, you can provide files via volume mounts. 
``` docker run -v $PWD:/example openpolicyagent/opa eval -d /example 'data.example.greeting' ``` policy.rego: ``` package example greeting := msg { info := opa.runtime() hostname := info.env[\"HOSTNAME\"] # Docker sets the HOSTNAME environment variable. msg := sprintf(\"hello from container %q!\", [hostname]) }``` For more information on OPA's command line, see --help: ``` docker run openpolicyagent/opa run --help ``` The Docker Hub repository contains tags for every release of OPA. For more information on each release see the GitHub Releases page. The latest tag refers to the most recent release. The latest tag is convenient if you want to quickly try out OPA; however, for production deployments, we recommend using an explicit version" }, { "data": "tag. Development builds are also available on Docker Hub. For each version the {version}-dev tag refers to the most recent development build for that version. The edge tag refers to the current main branch of OPA. Useful for testing unreleased features. It is not recommended to use edge for production deployments. The version information is contained in the OPA executable itself. You can check the version with the following command: ``` docker run openpolicyagent/opa version ``` This section shows how to quickly deploy OPA on top of Kubernetes to try it out. If you are interested in using OPA to enforce admission control policies in Kubernetes, see the Kubernetes Admission Control Tutorial. These steps assume Kubernetes is deployed with minikube. If you are using a different Kubernetes provider, the steps should be similar. You may need to use a different Service configuration at the end. First, create a ConfigMap containing a test policy. In this case, the policy file does not contain sensitive information so it's fine to store as a ConfigMap. If the file contained sensitive information, then we recommend you store it as a Secret. example.rego: ``` package example greeting := msg { info := opa.runtime() hostname := info.env[\"HOSTNAME\"] # Kubernetes sets the HOSTNAME environment variable. msg := sprintf(\"hello from pod %q!\", [hostname]) }``` ``` kubectl create configmap example-policy --from-file example.rego ``` Next, create a Deployment to run OPA. The ConfigMap containing the policy is volume mounted into the container. This allows OPA to load the policy from the file system. deployment-opa.yaml: ``` apiVersion: apps/v1 kind: Deployment metadata: name: opa labels: app: opa spec: replicas: 1 selector: matchLabels: app: opa template: metadata: labels: app: opa name: opa spec: containers: name: opa image: openpolicyagent/opa:0.65.0 ports: name: http containerPort: 8181 args: \"run\" \"--ignore=.*\" # exclude hidden dirs created by Kubernetes \"--server\" \"/policies\" volumeMounts: readOnly: true mountPath: /policies name: example-policy volumes: name: example-policy configMap: name: example-policy ``` ``` kubectl create -f deployment-opa.yaml ``` At this point OPA is up and running. 
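Before wiring up a Service, it can help to confirm that the Pod started and that OPA loaded the mounted policy. This quick check is not part of the original walkthrough, but both commands are standard kubectl and match the labels used above:

```
kubectl get pods -l app=opa
kubectl logs deployment/opa
```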
Create a Service to expose the OPA API so that you can query it: service-opa.yaml: ``` kind: Service apiVersion: v1 metadata: name: opa labels: app: opa spec: type: NodePort selector: app: opa ports: name: http protocol: TCP port: 8181 targetPort: 8181 ``` ``` kubectl create -f service-opa.yaml ``` Get the URL of OPA using minikube: ``` OPA_URL=$(minikube service opa --url) ``` Now you can query OPA's API: ``` curl $OPA_URL/v1/data ``` OPA will respond with the greeting from the policy (the pod hostname will differ): ``` { \"result\": { \"example\": { \"greeting\": \"hello from pod \\\"opa-78ccdfddd-xplxr\\\"!\" } } } ``` OPA exposes a /health API endpoint that you can configure Kubernetes Readiness and Liveness Probes to call. For example: ``` containers: name: opa image: openpolicyagent/opa:0.65.0 ports: name: http containerPort: 8181 args: \"run\" \"--ignore=.*\" # exclude hidden dirs created by Kubernetes \"--server\" \"/policies\" volumeMounts: readOnly: true mountPath: /policies name: example-policy livenessProbe: httpGet: path: /health scheme:" }, { "data": "HTTP # assumes OPA listens on localhost:8181 port: 8181 initialDelaySeconds: 5 # tune these periods for your environment periodSeconds: 5 readinessProbe: httpGet: path: /health?bundle=true # Include bundle activation in readiness scheme: HTTP port: 8181 initialDelaySeconds: 5 periodSeconds: 5 ``` See the Health API documentation for more detail on the /health API endpoint. OPA uses the standard Go net/http package for outbound HTTP requests that download bundles, upload decision logs, etc. In environments where an HTTP proxy is required, you can configure OPA using the pseudo-standard HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables. For more information see the Resource Utilization section on the Policy Performance page. Depending on how you deploy OPA, it may or may not have policies available as soon as it starts up. If OPA starts making decisions without any policies, it will return undefined as an answer to all policy queries. This can be problematic because even though OPA returns a response, it has not actually returned the decision dictated by policy. For example, without loading any policies into OPA whatsoever, a policy query will return the answer undefined, which via the HTTP API is represented as an empty JSON object {}. ``` $ opa run -s $ curl localhost:8181/v1/data/foo/bar {} ``` In contrast, when policies are loaded, OPA is operationally ready for policy queries, and when the answer is defined, it is a JSON object of the form {\"result\": ...} ``` $ opa run foo.rego -s $ curl localhost:8181/v1/data/foo/bar {\"result\": 7} ``` However, it is possible that even though policies have been loaded, the policy response is still undefined because the policy makes no decision for the given inputs. ``` $ opa run foo.rego -s $ curl localhost:8181/v1/data/baz {} ``` Just because OPA has returned an answer for a policy query, that does not indicate that it was operationally ready for that query. Moreover, the operational readiness of OPA cannot be ascertained from the query response, as illustrated above. Two issues must therefore be addressed: how to know when OPA is operationally ready for policy queries and how to make a decision before OPA is ready. The relevance of the discussion above depends on how you have chosen to deploy policies into OPA. If you deploy policies to OPA on disk (e.g. 
volume mounting into the OPA container on Kubernetes), then OPA will only start answering policy queries once all the policies are successfully loaded. In this case, it is impossible for OPA to answer policy queries before it has loaded policy, so the discussion above is a non-issue. On the other hand, if you use the Bundle service, OPA will start up without any policies and immediately start downloading a bundle. But even before the bundle has successfully downloaded, OPA will answer policy queries if asked (which is, in every case except the bootstrap case, the right thing to do). For this reason, OPA provides a /health API that verifies that the server is operational and, optionally, that a bundle has been successfully activated. The recommendation is to ensure the /health API indicates that OPA is operational before routing policy queries to it, so that OPA is never asked a question it is not yet ready to answer. Finally, you might choose to push policies into OPA via its REST API. In this case, there is no way for OPA to know whether it has a complete policy set, and so the decision as to when to route policy queries to OPA must be handled by whatever software is pushing policies into OPA. The mechanisms discussed above ensure that OPA is not asked to answer policy queries before it is ready to do so. But from the perspective of the software needing decisions, until OPA is operational, the software must make a decision on its own. Typically there are two choices: fail open (allow the request) or fail closed (deny the request). The choices are more varied if the policy is not making an allow/deny decision, but often there is some analog to fail-open and fail-closed. The key observation is that this logic is entirely the responsibility of the software asking OPA for a policy decision. Despite the fact that what to do when OPA is unavailable is technically a policy question, it is one that we cannot rely on OPA to answer. The right logic can depend on many factors including the likelihood of OPA not making a decision and the cost of allowing or denying a request incorrectly. In Kubernetes admission control, for example, the Kubernetes admin can choose whether to fail-open or fail-closed, leaving the decision up to the user. And often this is the correct way to build an integration because it is unlikely that there is a universal solution. For example, running an OPA integration in a development environment might require fail-open, but running exactly the same integration in a production environment might require fail-closed. OPA now supports a capabilities check on policies. The check allows callers to restrict the built-in functions that policies may depend on. If the policies passed to OPA require built-ins not listed in the capabilities structure, an error is returned. The capabilities check is currently supported by the check and build sub-commands and can be accessed programmatically on the ast.Compiler structure. The OPA repository includes a set of capabilities files for previous versions of OPA in the capabilities folder. 
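If you need one of those files locally, it can be fetched straight from the repository; the URL below is an assumption based on that folder layout, so adjust the version to whatever you are targeting:

```
curl -L -o v0.22.0.json \
  https://raw.githubusercontent.com/open-policy-agent/opa/main/capabilities/v0.22.0.json
```

With a file in hand, you pass it via --capabilities to check or build, as shown next.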
For example, given the following policy: ``` package example deny[\"missing semantic version\"] { not valid_semantic_version_tag } valid_semantic_version_tag { semver.is_valid(input.version) } ``` We can check whether it is compatible with different versions of OPA: ``` $ opa build ./policies/example.rego --capabilities ./capabilities/v0.22.0.json $ opa build ./policies/example.rego --capabilities ./capabilities/v0.21.1.json ``` The build command can validate policies against a configurable set of OPA capabilities. The capabilities define the built-in functions and other language features that policies may depend on. For example, the following capabilities file only permits the policy to depend on the plus built-in function (+): ``` { \"builtins\": [ { \"name\": \"plus\", \"infix\": \"+\", \"decl\": { \"type\": \"function\", \"args\": [ { \"type\": \"number\" }, { \"type\": \"number\" } ], \"result\": { \"type\": \"number\" } } } ] } ``` The following command builds a directory of policies (./policies) and validates them against capability-built-in-plus.json: ``` opa build ./policies --capabilities ./capability-built-in-plus.json ``` When passing a capabilities definition file via --capabilities, one can restrict which hosts remote schema definitions can be retrieved from. For example, a capabilities.json containing the json below would disallow fetching remote schemas from any host but kubernetesjsonschema.dev. Setting allow_net to an empty array would prohibit fetching any remote schemas. capabilities.json ``` { \"builtins\": [ ... ], \"allow_net\": [ \"kubernetesjsonschema.dev\" ] } ``` Not providing a capabilities file, or providing a file without an allow_net key, will permit fetching remote schemas from any host. Note that the metaschemas http://json-schema.org/draft-04/schema, http://json-schema.org/draft-06/schema, and http://json-schema.org/draft-07/schema, are always available, even without network access. Similarly, the allow_net capability restricts what hosts the http.send built-in function may send requests to, and what hosts the net.lookup_ip_addr built-in function may resolve IP addresses for. Some features of OPA can be toggled on and off through the features list: ``` { \"features\": [ \"rule_head_ref_string_prefixes\", \"rule_head_refs\", \"rego_v1_import\" ] } ``` Features present in the list are enabled, while features not present are disabled. The following features are available: The availability of future keywords in an OPA version can also be controlled using the capabilities file: ``` { \"future_keywords\": [ \"in\" ] } ``` With these capabilities, the future import future.keywords.in would be available. See the documentation of the membership and iteration operator for details. A specific OPA version's capabilities file shows which Wasm ABI versions it is capable of evaluating: ``` { \"wasm_abi_versions\": [ { \"version\": 1, \"minor_version\": 1 }, { \"version\": 1, \"minor_version\": 2 } ] } ``` This snippet would allow for evaluating bundles containing Wasm modules of the ABI version 1.1 and 1.2. See the ABI version docs for details. Use the following JSON structure to build more complex capability checks. ``` { \"builtins\": [ { \"name\": \"name\", // REQUIRED: Unique name of built-in function, e.g., <name>(arg1,arg2,...,argN) \"infix\": \"+\", // OPTIONAL: Unique name of infix operator. Default should be unset. \"decl\": { // REQUIRED: Built-in function type declaration. 
\"type\": \"function\", // REQUIRED: states this is a function \"args\": [ // REQUIRED: List of types to be passed in as an argument: any, number, string, boolean, object, array, set. { \"type\": \"number\" }, { \"type\": \"number\" } ], \"result\": { // REQUIRED: The expected result type. \"type\": \"number\" } } } ], \"allow_net\": [ // OPTIONAL: allow_net is an array of hostnames or IP addresses, that an OPA instance is allowed to connect to. \"mycompany.com\", \"database.safe\", ], \"future_keywords\": [ \"in\" ] } ```" } ]
{ "category": "Provisioning", "file_name": "intro.md", "project_name": "Open Policy Containers", "subcategory": "Security & Compliance" }
[ { "data": "The Open Policy Containers project makes it easy to build Open Policy Agent policies into OCI images. A policy that is packaged as an OCI image can be tagged just like any container image. It can also be signed using tools like cosign. To get started, you need two things: Container registries that policy has been tested with include: To get started, you'll need to download the policy CLI. Follow our 5 minute tutorial to get a flavor for what policy can do for you." } ]
{ "category": "Provisioning", "file_name": "modeling.md", "project_name": "OpenFGA", "subcategory": "Security & Compliance" }
[ { "data": "This section has guides, concepts and examples that help you define an authorization model. The content in this section is useful: How to create an authorization model for your system starting from the requirements. Learn the basics of modeling authorization and granting access to users. Learn to model user group membership, and to grant access to all members of a group. Learn to model roles for users at the object level and model permissions for those roles. Learn to model access based on parent-child relationships, e.g.: folders and documents. Learn to model denying access if users are part of list of blocked users. Learn to model giving everyone specific access to an object, e.g.: everyone can read. Learn to model requiring multiple privileges before granting access. Learn to model custom roles that are created by users. Learn to model requiring dynamic attributes. Learn to model and authorize when IP Address, time, and other dynamic and contextual restrictions are involved. Learn to model and authorize when a user belongs to multiple organizations. Learn the underlying concepts/building blocks that can be used to build any model. Explore advanced use cases and patterns for authorization modeling with OpenFGA. Learn to migrate relations and models in a production environment." } ]
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "Paralus", "subcategory": "Security & Compliance" }
[ { "data": "Eliminate the burden of having to configure and manage Kubernetes Role Based Access Control cluster by cluster. Streamline and consolidate access control for your fleet of clusters spanning different operating environments, different clouds (EKS, AKS, etc.), and on-premises data centers operating behind firewalls. A single login gives authorized users (e.g., developers, operations, contractors, etc.) seamless and secure access to all clusters with a native and familiar kubectl experience. Comply with security policies with completely private datapaths. Reduce complexity and eliminate the risk of security breaches and loss in productivity. Most of the global workforce today is working remotely. They access sensitive data using their devices from anywhere in the world. Hence, organizations need to implement a security framework that adapts to this. Enter Zero Trust: a modern security framework that does away with implicit trust and provides fine-grained control over your applications and infrastructure. The concept of Zero Trust is governed by 3 principles: As a developer, you should adopt Zero Trust principles to keep the users safe and applications secured on your Kubernetes clusters. Paralus is a handy tool for you to manage access to your Kubernetes clusters. It allows you to configure kubectl access for your teams at organization, project and namespace level. It works natively with Kubernetes's RBAC support and integrates seamlessly with SSO. You can configure these settings using an intuitive UI or use our command line utility pctl. Wondering where to start? Check out our Installation docs to get started in minutes. We are a Cloud Native Computing Foundation sandbox project. The Linux Foundation (TLF) has registered trademarks and uses trademarks. For a list of TLF trademarks, see Trademark Usage." } ]
{ "category": "Provisioning", "file_name": "#member-management.md", "project_name": "Permit.io", "subcategory": "Security & Compliance" }
[ { "data": "Managing your workspace efficiently is integral to optimizing your workflow and productivity. Within Permit, we provide intuitive tools and features designed to help you seamlessly manage your team and their access, and maintain logs of all your essential tasks. The team management tool ensures controlled access for team members. Depending on their roles, members might modify policies through the dashboard or the API. For workspace access control, navigate to the Settings screen and choose the Members tab. When adding a new member into your workspace, an invitation is sent to that user. As part of the invitation, you need to specify whether the user will be assigned a Workspace Owner role or a Workspace Editor role. A user can have access to an entire workspace, or they can have access to individual projects and/or environments. If you want a member to have different permissions for a particular project or environment, you can invite them directly into that specific project and environment with a new role. Their workspace owner permissions will continue to remain unchanged elsewhere. If you don't invite a user into your workspace as a member, but only invite them directly into a specific project or environment, they would only be part of those projects, and their access level will show as Mixed if they have different access across two or more projects. Shaul, one of our talented engineers, has selective access within our workspace. While he has comprehensive access to the Shopping App project as its Project Owner, encompassing all its environments, his privileges are more limited in the Internal App project, where he's an Environment Editor for just the dev environment. You can either invite members individually or send bulk invitations to your workspace. However, currently, we can't send bulk invites with varied roles. If you want some users to be workspace owners and others to be workspace editors, you'll need to handle this in two separate invites. You can change the roles for each member, and for each project and environment they are assigned to, directly from the Members page. If a user is part of more than two projects, but not of the workspace, you can also give them access to the entire workspace, selecting either the Workspace Editor or Workspace Owner role. By default, when you invite a new member to your workspace, they will be sent an email with a link to accept the invitation. While the invite is still pending, meaning the user has not yet accepted the invite, they will be shown as a blank user. If for some reason the user has not received the email, you can manually copy the invitation link and send it directly to the user. Each link is bespoke to every added user. The API Keys screen lets you create and oversee your API Keys. Using these keys, you can make requests to the Permit REST API at organizational, project and environment level. For API key management, simply navigate to the Settings on the Sidebar and click on the API Keys tab. In the hierarchy of key management, Organization Keys reside at the top, governing all organizational assets and projects. They are primarily accessible by top-level administrators, ensuring comprehensive oversight and configurations at a broad level. One step below are the Project Keys, which are tailored to individual projects or products. These keys facilitate granular control over specific projects, and their access is typically granted to project managers and the relevant team members. 
At the most specific level, Environment Keys manage distinct project phases like Development, Testing, or Production. Developers and operations teams usually manage these, adapting configurations to suit each environment's unique needs. Environment-level API keys are automatically generated for new environments. The activity log screen shows the Permit.io internal audit log - who did what within your Permit.io workspace. This log will only show changes made via the Permit.io dashboard. If you wish to see changes made via the Permit.io REST API, you should check out the API log. To view the activity log, go to the Settings screen on the Sidebar and select the Activity Log tab. Only workspace owners may view activity logs. If you cannot see this screen, contact the workspace owner (typically the person who opened the account). To pinpoint a particular activity in the log carried out by a team member, simply refine your search by selecting the specific date of the action and/or filtering by the individual who made the changes. This targeted approach ensures that tracking and reviewing internal operations remain both efficient and thorough, enhancing accountability within the team. To view the API log, go to the Settings screen on the Sidebar and select the API Log tab. Only workspace owners may view the API log. If you cannot see this screen, contact the workspace owner (typically the person who opened the account). You have the ability to delve deeper into individual API logs to examine the complete request and response details." } ]
{ "category": "Provisioning", "file_name": "tutorial.md", "project_name": "Open Policy Containers", "subcategory": "Security & Compliance" }
[ { "data": "Sign in to your container registry. This example is written for ghcr.io. More detailed instructions on other registries can be found here. ``` $ echo $PAT| policy login -s ghcr.io -u <github-account> --password-stdin``` If you already have an OPA policy: ``` $ cd <directory that has an OPA project - rego files / data files>``` Or, if you want to play with a sample OPA policy: ``` $ git clone https://github.com/aserto-demo/policy-peoplefinder$ cd policy-peoplefinder``` ``` $ policy build ./src -t <registry>/<your_organization>/peoplefinder:1.0.0Created new image.digest: sha256:84dbd4e3b5572dd2f23c3c987c89443fdcb57af87d714ea296fc552192fb17e9Tagging image.reference: ghcr.io/ogazitt/peoplefinder:1.0.0``` ``` $ policy images| REPOSITORY | TAG | SIZE | CREATED AT |||-|-|--|| ghcr.io/ogazitt/peoplefinder | 1.0.0 | 555 B | 1 minute ago |``` ``` $ policy push ghcr.io/ogazitt/peoplefinder:1.0.0Resolved ref [ghcr.io/ogazitt/peoplefinder:1.0.0].digest: sha256:84dbd4e3b5572dd2f23c3c987c89443fdcb57af87d714ea296fc552192fb17e9Pushed ref [ghcr.io/ogazitt/peoplefinder:1.0.0].digest: sha256:05e6ed84d86f6a252e24f33cb12138d9193780f1d89a1b2ff14ced315fdf8481``` ``` $ policy tag ghcr.io/ogazitt/peoplefinder:1.0.0 ghcr.io/ogazitt/peoplefinder:latestTagging image.reference: ghcr.io/ogazitt/peoplefinder:latest``` Check out the new tag: ``` $ policy images| REPOSITORY | TAG | SIZE | CREATED AT |||--|-||| ghcr.io/ogazitt/peoplefinder | 1.0.0 | 555 B | 5 minutes ago || ghcr.io/ogazitt/peoplefinder | latest | 555 B | 5 minutes ago |``` ``` $ policy rm ghcr.io/ogazitt/peoplefinder:1.0.0Removed reference.reference: ghcr.io/ogazitt/peoplefinder:1.0.0``` It's gone! ``` $ policy images| REPOSITORY | TAG | SIZE | CREATED AT |||--|-||| ghcr.io/ogazitt/peoplefinder | latest | 555 B | 6 minutes ago |``` ``` $ policy pull ghcr.io/ogazitt/peoplefinder:1.0.0Pulling.ref: ghcr.io/ogazitt/peoplefinder:1.0.0Pulled ref [ghcr.io/ogazitt/peoplefinder:1.0.0].digest: sha256:84dbd4e3b5572dd2f23c3c987c89443fdcb57af87d714ea296fc552192fb17e9``` It's back! ``` $ policy images| REPOSITORY | TAG | SIZE | CREATED AT |||--|-||| ghcr.io/ogazitt/peoplefinder | latest | 555 B | 7 minutes ago || ghcr.io/ogazitt/peoplefinder | 1.0.0 | 555 B | 7 minutes ago |``` ``` $ policy save ghcr.io/ogazitt/peoplefinderResolved ref [ghcr.io/ogazitt/peoplefinder:1.0.0].digest: sha256:84dbd4e3b5572dd2f23c3c987c89443fdcb57af87d714ea296fc552192fb17e9``` You should now have a bundle.tar.gz file in your directory. Issue these commands to generate a key pair, sign the policy image, and verify its signature using the public key: ``` $ docker login -u <GitHub-account> ghcr.io$ cosign initialize$ cosign generate-key-pair$ cosign sign --key cosign.key ghcr.io/ogazitt/peoplefinder:1.0.0$ cosign verify --key cosign.pub ghcr.io/ogazitt/peoplefinder:1.0.0``` For more detailed instructions, see the signing images docs. ``` $ policy repl ghcr.io/ogazitt/peoplefinder:1.0.0Pulling.ref: ghcr.io/ogazitt/peoplefinder:1.0.0Pulled ref [ghcr.io/ogazitt/peoplefinder:1.0.0].digest: sha256:84dbd4e3b5572dd2f23c3c987c89443fdcb57af87d714ea296fc552192fb17e9running policy [ghcr.io/ogazitt/peoplefinder:1.0.0]> data.system.bundles{ \"/Users/ogazitt/.policy/policies-root/blobs/sha256/84dbd4e3b5572dd2f23c3c987c89443fdcb57af87d714ea296fc552192fb17e9\": { \"manifest\": { \"revision\": \"\", \"roots\": [ \"peoplefinder\" ] } }}>```" } ]
{ "category": "Provisioning", "file_name": "how-does-it-work#permits-hybrid-architecture.md", "project_name": "Permit.io", "subcategory": "Security & Compliance" }
[ { "data": "One of the main challenges of implementing authorization properly is making sure it can evolve along with your application. As our app's requirements evolve (from a simple Admin - Non-Admin logic into roles, attributes, relationships, and more), it becomes very difficult to maintain a consistent authorization layer without cumbersome spaghetti code. Here's how Permit can help - by decoupling policy from code, a best practice that separates the app's authorization code from the actual application code by moving it into a dedicated authorization microservice. Open-source Policy Engines such as Open Policy Agent (OPA) and AWS Cedar provide an excellent baseline for creating such a microservice. Still, this authorization microservice requires a lot of maintenance work, especially around connecting the microservice to the application, its policy sources, the data it needs, and the access-control interfaces we need to build on top. That's where Permit comes in - utilizing these existing open source solutions, Permit provides you with a microservice for authorization based on your Policy Engine of choice, an administration layer, OPAL, which keeps the policy engine up to date with the latest policy and data updates, a set of SDKs per your language of choice, and a cloud service to manage it all. The Permit architecture consists of two main parts, a Control Plane and a Data Plane: The Data Plane stores all the actual data required to make authorization decisions. This includes authorization policies, names, emails, etc. The Control Plane includes the relationships between various entities required to make authorization decisions (User IDs, Roles, Attributes, etc.). Basically, the Control Plane, through which you make changes and updates to your authorization layer, is managed via Permit's Cloud Service, while the Data Plane can be fully kept and managed within your own VPC / Network. This means you can manage your authorization layer with Permit without the need to expose any of your data to the cloud. Some of the benefits of this architecture are: No sensitive data leaves your network/cloud, ensuring your app's security and compliance. Authorization decisions are made on your side with zero latency. You are not dependent on Permit's availability to make authorization decisions. Two main components enable this hybrid architecture - OPAL, and the Permit PDP: Open Policy Administration Layer (OPAL) is an open-source project developed and maintained by the Permit.io team. It serves as an administration layer for Policy Engines, detecting changes to both policy and policy data in real-time and pushing live updates to your agents. OPAL consists of two elements - the OPAL Server and the OPAL Client. The OPAL Server is hosted as part of Permit's Cloud Service. It creates a Pub/Sub channel for OPAL clients to subscribe to, tracks a Git repository (via webhook/polling) for policy updates, and pushes those updates to clients (as diffs - only updating changes, not the entire thing). The OPAL Client is deployed as part of the PDP. A PDP is a network node responsible for answering authorization queries using policies and contextual data. The PDP provided to you by Permit acts as your microservice for authorization and is deployed as a sidecar to your own services. Permit's PDP consists of a Policy Engine and the OPAL Client: The Policy Engine is in charge of evaluating authorization queries, using the policy rules as a source of truth. 
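To make that concrete, here is a minimal rule in Rego (one of the policy languages discussed next) of the kind such an engine evaluates - an illustrative sketch, not code generated by Permit:

```
package authz

import rego.v1

# deny by default; grant access only when the querying user carries the admin role
default allow := false

allow if input.user.role == "admin"
```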
Authorization policies are written in Policy Languages (such as Rego or Cedar), which the policy engine interprets, providing a decision to any authorization query it is presented with. Permit is policy engine agnostic, currently supporting Open Policy Agent and AWS Cedar (with support for more policy engines coming soon), allowing you to choose the one most suitable for your needs. The OPAL Client is deployed alongside the policy agent and keeps it up to date with the latest policy and data. It does so by subscribing to topic-based Pub/Sub updates for both data and policy. Policy and data are fetched from the OPAL Server (hosted in Permit's Cloud Service) and any other relevant sources (e.g., DBs, APIs, 3rd party services). The combination of Permit's Cloud Service, OPAL, and the PDP allows you to manage changes to your authorization layer via Permit (using the UI/API), and have these changes propagated instantly into your application - all without your data ever having to leave your network/cloud." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Permit.io", "subcategory": "Security & Compliance" }
[ { "data": "Permit.io is a cloud-based authorization service that allows you to define and manage permissions for your application. In order to make it easier and safer to manage your objects and policies in Permit.io, we have created a Terraform provider. The Terraform provider is open source and available on GitHub. ``` terraform { required_providers { permitio = { source = \"registry.terraform.io/permitio/permit-io\" version = \"~> 0.0.1\" } }}``` ``` provider \"permitio\" { api_url = \"https://api.permit.io\" # Defaults to \"https://api.permit.io\" - Can be set as an environment variable PERMITIO_API_URL api_key = \"YOUR_API_KEY\" # Can be set as an environment variable PERMITIO_API_KEY}``` ``` resource \"permitio_resource\" \"document\" { key = \"document\" name = \"Document\" description = \"A confidential document\" actions = { \"read\" : { \"name\" : \"Read\", \"description\" : \"Read a document\", }, \"write\" : { \"name\" : \"Write\", \"description\" : \"Write a document\", } }}``` ``` resource \"permitio_role\" \"reader\" { key = \"reader\" name = \"Reader\" description = \"A role that allows reading documents\" permissions = [ \"document:read\" ] extends = [] depends_on = [ permitio_resource.document # This is required to ensure that the resource is created before the role (for the permissions assignment) ]}``` ``` resource \"permitio_resource_set\" \"secret_docs\" { key = \"secret_docs\" name = \"Secret Docs\" resource = permitio_resource.document.key conditions = jsonencode({ \"allOf\" : [ { \"allOf\" : [ { \"resource.title\" = { contains = \"Rye\" }, } ] } ] })}``` ``` resource \"permitio_user_set\" \"privileged_users\" { key = \"privileged_users\" name = \"Privileged Users\" conditions = jsonencode({ \"allOf\" : [ { \"allOf\" : [ { \"subject.email\" = { contains = \"@admin.com\" }, } ] } ] })}``` ``` resource \"permitio_condition_set_rule\" \"allow_privileged_users_to_read_secret_docs\" { user_set = permitio_user_set.privileged_users.key resource_set = permitio_resource_set.secret_docs.key permission = \"document:read\"}``` ``` resource \"permitio_proxy_config\" \"foaz\" { key = \"foaz\" name = \"Boaz\" auth_mechanism = \"basic\" auth_secret = { basic = \"hello:world\" } mapping_rules = [ { url = \"https://example.com/documents\" http_method = \"post\" resource = \"document\" action = \"read\" }, { url = \"https://example.com/documents/{project_id}\" http_method = \"get\" resource = \"document\" action = \"read\" }, { url = \"https://example.com/documents/{project_id}\" http_method = \"put\" resource = \"document\" action = \"update\" headers = { \"x-update-id\": \"foaz\" } }, { url = \"https://example.com/documents/{project_id}\" http_method = \"delete\" resource = \"document\" action = \"delete\" } ]}```" } ]
{ "category": "Provisioning", "file_name": "docs.permit.io.md", "project_name": "Permit.io", "subcategory": "Security & Compliance" }
[ { "data": "Here you'll find comprehensive guides and resources to help you get started with Permit and explore our platform. Learn what Permit is and how it works Best practices and how-to guides Explore the supported SDKs and frameworks Integrate Permit with your favorite tools Learn how to manage multi-tenancy with Permit GitOps-ready Permission Management Manage you policy with Terraform Pre-built UI components for your app Relationship-based Access Control Attribute-based Access Control" } ]
{ "category": "Provisioning", "file_name": "types-and-filtering.md", "project_name": "Permit.io", "subcategory": "Security & Compliance" }
[ { "data": "Knowing who has done what in your application is critical for multiple reasons, including security, compliance, debugging and even just plain old monitoring. Audit Logs are a great way to track who did what, when they did it, and why they were, or were not, granted the permissions. They are a useful tool both for you and your team as the maintainers of your application, and for your end-users, who want to track their own usage and actions within the app. As Permit.io powers your application, you get automatic audit logs. Enforcement points, along with resource and action definitions, and user/tenant configurations automatically come together to generate the full picture needed for an audit log entry. You'll find that the Permit web interface hosts a specialized page where you can conveniently view your audit logs. Alternatively, access them directly in your deployed PDP microservice. With logs being written to stdout/stderr, it's easy to directly integrate them with your preferred logging platform. There are two types of audit logs: Navigate to the Audit Log page in the Permit dashboard. We periodically delete old audit logs to maintain system efficiency. If you need access to older logs or would like to extend log retention periods for your account, please reach out to us via Slack or email us at help@permit.io. You can filter audit logs based on user, date, decision & tenant. The maximum number of results for this API is 10,000. To obtain additional results, readjust the query's time window or apply additional filters. Every time a permission check takes place, not only are we logging the details of who initiated a particular action on a resource and whether it was approved or denied, but we also provide a detailed decision log. A decision log represents the real-time status of the ongoing decision, sourced directly from the OPA (Open Policy Agent) engine and presented in a user-friendly format by our system. Furthermore, we take an extra step to analyze each decision log closely to furnish you with a comprehensive, human-readable explanation that delineates the underlying reasons behind the outcome. This effort is designed to facilitate a seamless understanding of why a particular permission was granted or denied, enhancing transparency and efficiency. Below, you can find an example of what a human-readable reason looks like inside of each decision log. ``` \"reason\": \"user 'c2fc4a07-7edf-48fb-8fd8-e4989e103a77' does not match any rule that grants him the 'delete' permission on the given resource of type 'task'\"```" } ]
{ "category": "Provisioning", "file_name": "overview.md", "project_name": "Permit.io", "subcategory": "Security & Compliance" }
[ { "data": "As a best practice, Permit.io encourages you to manage your authorization policy as code. There are many benefits to having policy as code, including improved consistency, accuracy, and traceability. Defining policies using code provides you with the ability to ensure policies are consistently enforced across different systems and environments, which can help prevent policy violations and reduce the risk of unauthorized access to sensitive data or systems. Policy as code allows you to more easily manage and update policies. When managed as code, policies can be maintained using the same tools and processes used to manage and deploy software. This makes it easier to track changes to policies over time, roll back changes if necessary, and in general, enjoy the well-thought-through best practices of the code world (e.g., GitOps). In short, policy as code saves us from reinventing the wheel. While policy-as-code should be managed in a code repository, that doesn't mean it must be authored as pure code. By simplifying policy creation, we can make our work as developers easier, free ourselves from becoming bottlenecks, and empower other critical stakeholders (e.g., product managers, security, compliance, support, professional services, and sales) to participate in the policy creation process. Permit.io's low-code policy-editor generates code for you (primarily OpenPolicyAgent Rego code - see example here). The policy code generated by the policy-editor is saved into a Git repository before being deployed to your PDP (aka your microservice for authorization). You can own and manage this Git repository - allowing you full control of the code there, as well as setting up a CI process between Permit environments. Adding tests, benchmarks, and code reviews, and writing more manual code, provide you with all the checks and balances you need before merging changes between Git branches (synced into different Permit environments). The feature is available in trial to all Permit users as a self-service. Please follow the guide here. Please note that the repository does not have to be the main code repository you use. The policy code can be nested in different folders within the repository. The Rego code you add can both import and refer to the code that is automatically generated." } ]
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "Portshift", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organization's data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you post." }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 through D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 through D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "own). If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "content. In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, \"Confidential Information\"), regardless of whether it is marked or identified as such." }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the \"Purpose\"), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, \"Feedback\"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S. Dollars." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service \"as is\" and \"as available\", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service, or that otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your expense." }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
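Section H's warning about "excessively frequent requests" is easiest to respect by watching the rate-limit data the GitHub REST API already exposes. A minimal sketch, assuming only curl and jq are installed (the /rate_limit endpoint and the X-RateLimit-* response headers are GitHub's documented rate-limit interface; the username is a placeholder):

```
# Ask the API how much of your quota remains; the response carries a
# "rate" object with limit, remaining, and reset fields.
curl -s https://api.github.com/rate_limit | jq .rate

# The same numbers ride along as headers on every API response:
curl -sI https://api.github.com/users/octocat | grep -i '^x-ratelimit'
```

Backing off once `remaining` approaches zero keeps a client well inside the conduct Section H expects.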
{ "category": "Provisioning", "file_name": "github-privacy-statement.md", "project_name": "Portshift", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement, oral or written, and any other communications between you and GitHub relating to the subject matter of these terms, including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal.
{ "category": "Provisioning", "file_name": ".md", "project_name": "Pluto", "subcategory": "Security & Compliance" }
[ { "data": "Common Use Cases Features Technical Details You're currently viewing the documenation on GitHub. Links, images, and other features will not work properly. To view the full docs, visit insights.docs.fairwinds.com Fairwinds Insights (opens new window) is a software for platform teams running Kubernetes to enable development. A policy engine, Fairwinds Insights provides workload configuration feedback loops, detects and prioritizes container security vulnerabilities and provides Kubernetes cost allocation and rightsizing. Fairwinds Insights runs across the entire development lifecycle, from CI to Admission to Production. It enables platform teams to find and prevent configuration problems as applications move from development to production. It provides out-of-the-box integrations into CI/CD workflows, an Admission Controller for enforcing custom policies (using Polaris or Open Policy Agent) at pre-deployment, and automation for running Kubernetes auditing tools. Findings and recommendations are stored in a single location, enabling operators to gain visibility and control over multiple Kubernetes clusters, track and prioritize issues, and monitor the security and cost of Kubernetes workloads. The goal of the Fairwinds Community is to exchange ideas, influence the open source roadmap, and network with fellow Kubernetes users. Chat with us on Slack (opens new window) or join the user group (opens new window) to get involved! Enjoying insights-docs? Check out some of our other projects: Or check out the full list (opens new window) Getting Value from Fairwinds Insights" } ]
{ "category": "Provisioning", "file_name": "docs.github.com.md", "project_name": "Portshift", "subcategory": "Security & Compliance" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "RBAC Lookup", "subcategory": "Security & Compliance" }
[ { "data": "GoNoGo is a utility to help users determine upgrade confidence around Kubernetes cluster addons. At this time, GoNoGo is currently in alpha. This means that we could change literally anything at any time without notice. Keep an eye out for major changes, and hopefully a v1 release at some point. A number factors can affect whether the upgrade of an addon (like cert-manager, nginx ingress, etc) will be successful or not. For example, some addon upgrades require a specific api to be available in the cluster, or a specific version of the Kubernetes cluster in general. Or perhaps an addon has deprecated a particular annotation and you want to make sure your upgraded addon doesn't include those deprecated annotations. Rather than having to manually assess each addon, GoNoGo enables you to create a specification (called a bundle spec) that you can populate with checks for the upgraded version, and run those against your cluster to get an upgrade confidence score. For example, cert-manager changed a number of annotations (opens new window) in the upgrade from 0.10 to 0.11. With GoNoGo you can add an OPA check to your bundle spec looking for instances of that annotation in the affected cluster resources and be warned about it before you do the upgrade. ``` gonogo --help The Kubernetes Add-On Upgrade Validation Bundle is a spec that can be used to define and then discover if an add-on upgrade is safe to perform. Usage: gonogo [flags] gonogo [command] Available Commands: check Check for Helm releases that can be updated completion Generate the autocompletion script for the specified shell help Help about any command version Prints the current version of the tool. Flags: -h, --help help for gonogo -v, --v Level number for the log level verbosity Use \"gonogo [command] --help\" for more information about a command. ``` Pass in a bundle spec with the addon definitions that you want to check ``` gonogo check /path/to/bundle/bundle.yaml ``` The goal of the Fairwinds Community is to exchange ideas, influence the open source roadmap, and network with fellow Kubernetes users. Chat with us on Slack (opens new window) or join the user group (opens new window) to get involved! Enjoying Gonogo? Check out some of our other projects: Or check out the full list (opens new window) Quickstart" } ]
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Portshift", "subcategory": "Security & Compliance" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT operator. For example, to exclude files in the testing directory, you can search: ``` "fatal error" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:"/tests/" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific language, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\/)README\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see "Using regular expressions." You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` Or, to search for JavaScript files within a src directory, you can use: ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:"file?" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?.
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf("hello world\n"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS.
{ "category": "Provisioning", "file_name": ".md", "project_name": "Prisma Cloud", "subcategory": "Security & Compliance" }
[ { "data": "Configure quantum-resistant IKEv2 VPNs to start becoming quantum ready. The Palo Alto Networks post-quantum cryptography solution is based on open standards, so you can deploy the solution as soon as youre ready. A new subscription to operate new domain detectors in the Advanced DNS Security cloud that inspect changes in DNS responses to detect various types of DNS hijacking in real-time. Your new NetSec homepage; it is your first stop to assess the health, security, and efficiency of your network. In a single view, the command center shows you all users and IoT devices accessing the internet, SaaS applications, and private apps, and how Prisma Access, your NGFWs, and your security services are protecting them. 2024 Palo Alto Networks, Inc. All rights reserved." } ]
{ "category": "Provisioning", "file_name": "home.html.md", "project_name": "Rudder", "subcategory": "Security & Compliance" }
[ { "data": "Discover Rudder What is RudderDiscover Rudder features and use cases Online demoExplore the Web interface with an online demo Get started guideGet started with Rudder on a test platform Install Rudder RequirementsRequirement for Rudder installation Install Rudder serverInstall a Rudder root server to define your configuration policies Install Rudder agentInstall a Rudder agent to apply configuration Use Rudder User manualReference documentation API referenceTo automate actions and extract information Rudder by examplefor real and detailed use-case examples Join the community Chat with the communityon our Gitter channel Ask questions on ServerFault or on our user mailing-list Report an issue or propose a featureon our issue tracker This page was built using the Antora default UI. The source code for this UI is licensed under the terms of the MPL-2.0 license." } ]
{ "category": "Provisioning", "file_name": "index.html.md", "project_name": "Rudder", "subcategory": "Security & Compliance" }
[ { "data": "This document will guide you through the installation of a test Rudder platform and the configuration of a real use-case, explaining how to: Define your configuration policies Manage machines with Rudder Apply and control the enforcement of configuration policies | 0 | 1 | |-:|:-| | nan | No prior knowledge about configuration management is required, only basic Unix command-line skills. | No prior knowledge about configuration management is required, only basic Unix command-line skills. Lets start by installing the test environment. This page was built using the Antora default UI. The source code for this UI is licensed under the terms of the MPL-2.0 license." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "RBAC Manager", "subcategory": "Security & Compliance" }
[ { "data": "Contributing Command line helper for helm. This utility adds to the functionality of Helm (opens new window) in multiple ways: Helm2 is untested as of v4.3.0. The maintainers of helm have deprecated helm2 (opens new window). In course.yml, write: ``` namespace: default charts: grafana: namespace: grafana values: image: tag: \"6.2.5\" polaris-dashboard: namespace: polaris-dashboard repository: git: https://github.com/FairwindsOps/charts path: stable chart: polaris ``` Then run: ``` reckoner plot course.yml --run-all ``` Grafana and Polaris should now be installed on your cluster! Importing existing releases is experimental and the result should be reviewed. If you're already using Helm but want to start using reckoner, you can use reckoner import to facilitate your migration. We recommend carefully examining the output of a reckoner diff before relying on any imported course.yml definitions. The goal of the Fairwinds Community is to exchange ideas, influence the open source roadmap, and network with fellow Kubernetes users. Chat with us on Slack (opens new window) or join the user group (opens new window) to get involved! Enjoying Reckoner? Check out some of our other projects: Or check out the full list (opens new window) Usage" } ]
{ "category": "Provisioning", "file_name": "cascading-scans.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "The Azure Monitor persistenceProvider hook saves all findings and reports into the configured Azure Monitor workspace using the Data Collector API. This allows working with the data in Azure Monitor or Microsoft Sentinel to configure alerting based on new findings. It will create a custom log type for every scantype titled SCB[scantypename]. Installing the Azure Monitor persistenceProvider hook will add a ReadOnly Hook to your namespace. The persistence-azure-monitor chart can be deployed via helm: ``` Kubernetes: >=v1.11.0-0 The hook requires the Azure Monitor Workspace ID and its Primary Key for authentication. For details on how to find them, see this page. Create a Kubernetes secret with these values using ``` Then, configure the hook to use this secret when installing it: ``` helm upgrade --install persistence-azure-monitor . --wait \\ --set=\"monitor.authentication.apiKeySecret=\"azure-monitor\"\"``` | Key | Type | Default | Description | |:|:-|:--|:-| | hook.affinity | object | {} | Optional affinity settings that control how the hook job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | hook.env | list | [] | Optional environment variables mapped into the hook (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | hook.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into the hook (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | hook.extraVolumes | list | [] | Optional Volumes mapped into the hook (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | hook.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | hook.image.repository | string | \"docker.io/securecodebox/hook-persistence-azure-monitor\" | Hook image repository | | hook.image.tag | string | defaults to the charts version | Container image tag | | hook.labels | object | {} | Add Kubernetes Labels to the hook definition | | hook.priority | int | 0 | Hook priority. Higher priority Hooks are guaranteed to execute before low priority Hooks. | | hook.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the hook container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | hook.tolerations | list | [] | Optional tolerations settings that control how the hook job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | hook.ttlSecondsAfterFinished | string | nil | Seconds after which the kubernetes job for the hook will be deleted. 
Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | monitor.authentication | object | {"apiKeySecret":null} | Configure authentication schema and credentials the persistence provider should use to connect to Azure Monitor | | monitor.authentication.apiKeySecret | string | nil | Link a pre-existing generic secret with workspace and sharedkey key / value pairs | | monitor.logtypePrefix | string | "SCB" | | Code of secureCodeBox is licensed under the Apache License 2.0.
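Instead of passing --set flags one by one, the same configuration can be collected in a values file and passed with helm's -f flag. The sketch below uses only keys documented in the table above; the secret name azure-monitor matches the install example:

```yaml
# values.yaml for the persistence-azure-monitor chart, built from the
# documented keys above; apply with: helm upgrade --install ... -f values.yaml
monitor:
  logtypePrefix: "SCB"          # custom log types become SCB_[scantype_name]
  authentication:
    # pre-existing generic secret holding "workspace" and "sharedkey" entries
    apiKeySecret: "azure-monitor"
hook:
  resources:
    requests: { cpu: "200m", memory: "100Mi" }
    limits: { cpu: "400m", memory: "200Mi" }
```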
{ "category": "Provisioning", "file_name": "what-is-scribe.md", "project_name": "Scribe Security Platform", "subcategory": "Security & Compliance" }
[ { "data": "Scribe is a platform designed to ensure the security of software supply chains. It's aimed at aiding software production teams like DevSecOps, AppSec, and development teams within either software vendor or SaaS organizations, as well as software consumers like buyers, compliance officers, and security operators. Scribe assists these users in generating the necessary evidence and policies for managing risk within their supply chains. Through the use of Scribe, you can assure the security of your product, manage Software Bills of Materials (SBOMs), and adhere to various regulations and standards. These include the NIST's Secure Software Development Framework (SSDF) and the Supply-chain Levels for Software Artifacts (SLSA) standards. In simpler terms, Scribe helps you prove your software is safe, keep track of all the parts used in your software, and follow important rules about software safety. Scribe continuously gathers and examines evidence from the software development and build processes to confirm that the software was built securely. This includes validating code integrity, ensuring code reviews before building, performing security tests, verifying that only approved dependencies are used, and validating that commits are made exclusively by authorized developers. Essentially, Scribe acts as a hub where software producers and consumers can exchange this evidence, attesting to the safety of their software products. Both producers and consumers can manage their risk by applying Scribe's policies to the collected evidence. This means that Scribe helps in setting and enforcing rules about what kind of evidence is needed to show that the software is safe." } ]
{ "category": "Provisioning", "file_name": "doggo.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "CMSeeK is an open source penetration testing tool to automate the process of detecting various types of CMS and its installed extensions. Only the Joomla CMS is supported by secureCodeBox. CMSeeK has a database with known vulnerabilities. To learn more about the CMSeeK scanner itself, visit the CMSeeK GitHub repository here. The cmseek chart can be deployed via helm: ``` The CMSeeK targets are specified with the -u parameter. The target should be a URL. Additional CMSeeK scan features can be configured via the parameter attribute. Some useful example parameters listed below: Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:--|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-cmseek\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. 
(see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | | scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | |" }, { "data": "| int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | | scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-cmseek\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts appVersion | | scanner.nameAppend | string | nil | append a string to the default scantype name. | | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":false,\"runAsNonRoot\":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | false | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. 
You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: ``` ```" } ]
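Putting the -u parameter from above into the secureCodeBox Scan custom resource (the CRD format used in the cascading-scans examples later in this document), a minimal scan might look like the sketch below; the target URL is a placeholder:

```yaml
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "cmseek-example"
spec:
  scanType: "cmseek"
  parameters:
    - "-u"
    - "http://target.example.com"   # placeholder target URL
```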
{ "category": "Provisioning", "file_name": "amass.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "Kubeaudit finds security misconfigurations in you Kubernetes Resources and gives tips on how to resolve these. Kubeaudit comes with a large lists of \"auditors\" which test various aspects, like the SecurityContext of pods. You can find the complete list of auditors here. To learn more about the kubeaudit itself visit kubeaudit GitHub. The kubeaudit chart can be deployed via helm: ``` The following security scan configuration example are based on the [kube-hunter Documentation], please take a look at the original documentation for more configuration examples. Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:-|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | kubeauditScope | string | \"namespace\" | Automatically sets up rbac roles for kubeaudit to access the resources it scans. Can be either \"cluster\" (ClusterRole) or \"namespace\" (Role) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-kubeaudit\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. 
(see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | | scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | |" }, { "data": "| list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-kubeaudit\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts appVersion | | scanner.nameAppend | string | nil | append a string to the default scantype name. | | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":true,\"runAsNonRoot\":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | true | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. 
You can then resume the job using kubectl resume <job_name> or using a job scheduler like kueue | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: In this example we execute a kubeaudit scan against the intentionally vulnerable juice-shop. Before executing the scan, make sure to set up juice-shop: ``` helm upgrade --install juice-shop secureCodeBox/juice-shop --wait ``` After that you can execute the scan in this directory: ``` kubectl apply -f scan.yaml ``` Make sure to install juice-shop in the same namespace as the scanner! If your juice-shop runs in, e.g., the kubeaudit-tests namespace, install the chart and run the scan there too: ``` helm upgrade --install kubeaudit secureCodeBox/kubeaudit -n kubeaudit-tests ``` Also, you must apply the scan.yaml in that namespace by passing the -n flag to kubectl. Alternatively, you can set the scope of kubeaudit to cluster: ``` helm upgrade --install kubeaudit secureCodeBox/kubeaudit -n kubeaudit-tests --set="kubeauditScope=cluster" ```
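The scan.yaml referenced in the example above is not shown on this page; a minimal sketch, assuming the scanType registered by the chart is named kubeaudit and that a default audit needs no extra parameters, would be:

```yaml
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "kubeaudit-juice-shop"
spec:
  scanType: "kubeaudit"
```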
{ "category": "Provisioning", "file_name": "dependency-track.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "The Azure Monitor persistenceProvider hook saves all findings and reports into the configured Azure Monitor workspace using the Data Collector API. This allows working with the data in Azure Monitor or Microsoft Sentinel to configure alerting based on new findings. It will create a custom log type for every scantype titled SCB[scantypename]. Installing the Azure Monitor persistenceProvider hook will add a ReadOnly Hook to your namespace. The persistence-azure-monitor chart can be deployed via helm: ``` Kubernetes: >=v1.11.0-0 The hook requires the Azure Monitor Workspace ID and its Primary Key for authentication. For details on how to find them, see this page. Create a Kubernetes secret with these values using ``` Then, configure the hook to use this secret when installing it: ``` helm upgrade --install persistence-azure-monitor . --wait \\ --set=\"monitor.authentication.apiKeySecret=\"azure-monitor\"\"``` | Key | Type | Default | Description | |:|:-|:--|:-| | hook.affinity | object | {} | Optional affinity settings that control how the hook job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | hook.env | list | [] | Optional environment variables mapped into the hook (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | hook.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into the hook (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | hook.extraVolumes | list | [] | Optional Volumes mapped into the hook (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | hook.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | hook.image.repository | string | \"docker.io/securecodebox/hook-persistence-azure-monitor\" | Hook image repository | | hook.image.tag | string | defaults to the charts version | Container image tag | | hook.labels | object | {} | Add Kubernetes Labels to the hook definition | | hook.priority | int | 0 | Hook priority. Higher priority Hooks are guaranteed to execute before low priority Hooks. | | hook.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the hook container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | hook.tolerations | list | [] | Optional tolerations settings that control how the hook job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | hook.ttlSecondsAfterFinished | string | nil | Seconds after which the kubernetes job for the hook will be deleted. 
Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | monitor.authentication | object | {\"apiKeySecret\":null} | Configure authentication schema and credentials the persistence provider should use to connect to Azure Monitor | | monitor.authentication.apiKeySecret | string | nil | Link a pre-existing generic secret with workspace and sharedkey key / value pairs | | monitor.logtypePrefix | string | \"SCB\" | nan | Code of secureCodeBox is licensed under the Apache License 2.0." } ]
{ "category": "Provisioning", "file_name": "generic-webhook.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "The Cascading Scans Hook can be used to orchestrate security scanners based on defined rule sets. The so called CascadingRules consist of a matches section which contains one or multiple rules which are compared against findings. When a finding matches a rule the scanSpec section will then be used to create a new scan. To customize the scan to match the finding, the mustache templating language can be used to reference fields of the finding. This Hook is based on the ADR https://www.securecodebox.io/docs/architecture/architecturedecisions/adr0003/ More information about how to use cascading scans can be found here: The cascading-scans chart can be deployed via helm: ``` Kubernetes: >=v1.11.0-0 Installing the Cascading Scans hook will add a ReadOnly Hook to your namespace which looks for matching CascadingRules in the namespace and start the according scans. ``` kubectl get ScanCompletionHooksNAME TYPE IMAGEdssh ReadOnly docker.io/securecodebox/hook-cascading-scans:latest``` The CascadingRules are included directly in each helm chart of the individual scanners. There is a configuration option cascadingRules.enabled for each scanner to prevent this inclusion. ``` When you start a normal Scan, no CascadingRule will be applied. To use a CascadingRule the scan must be marked to allow cascading rules. This is implemented using kubernetes label selectors, meaning that scans mark the classes of scans which are allowed to be cascaded by the current one. ``` cat <<EOF | kubectl apply -f -apiVersion: \"execution.securecodebox.io/v1\"kind: Scanmetadata: name: \"example.com\"spec: scanType: nmap parameters: - -p22,80,443 - example.com cascades: matchLabels: securecodebox.io/intensive: lightEOF``` This Scan will use all CascadingRules which are labeled with a \"light\" intensity. 
You can look up which CascadingRules this selects by running: ```
kubectl get CascadingRules -l "securecodebox.io/intensive=light"
NAME             STARTS     INVASIVENESS   INTENSIVENESS
https-tls-scan   sslyze     non-invasive   light
imaps-tls-scan   sslyze     non-invasive   light
nmap-smb         nmap       non-invasive   light
pop3s-tls-scan   sslyze     non-invasive   light
smtps-tls-scan   sslyze     non-invasive   light
ssh-scan         ssh-scan   non-invasive   light
``` The label selectors also allow the more powerful matchExpressions selectors: ```
cat <<EOF | kubectl apply -f -
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "example.com"
spec:
  scanType: nmap
  parameters:
    - -p22,80,443
    - example.com
  cascades:
    # Using matchExpressions instead of matchLabels
    matchExpressions:
      - key: "securecodebox.io/intensive"
        operator: In
        # This selects both light and medium intensity rules
        values: [light, medium]
EOF
``` This selection can be replicated in kubectl using: ```
kubectl get CascadingRules -l "securecodebox.io/intensive in (light,medium)"
NAME             STARTS              INVASIVENESS   INTENSIVENESS
https-tls-scan   sslyze              non-invasive   light
imaps-tls-scan   sslyze              non-invasive   light
nikto-http       nikto               non-invasive   medium
nmap-smb         nmap                non-invasive   light
pop3s-tls-scan   sslyze              non-invasive   light
smtps-tls-scan   sslyze              non-invasive   light
ssh-scan         ssh-scan            non-invasive   light
zap-http         zap-baseline-scan   non-invasive   medium
``` | Key | Type | Default | Description | |:--|:-|:--|:| | hook.affinity | object | {} | Optional affinity settings that control how the hook job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | hook.env | list | [] | Optional environment variables mapped into the hook (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | hook.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into the hook (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | hook.extraVolumes | list | [] | Optional Volumes mapped into the hook (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | hook.image.repository | string | "docker.io/securecodebox/hook-cascading-scans" | Hook image repository | | hook.image.tag | string | defaults to the charts version | The image Tag defaults to the charts version if not defined. | | hook.labels | object | {} | Add Kubernetes Labels to the hook definition | | hook.priority | int | 0 | Hook priority. Higher priority Hooks are guaranteed to execute before low priority Hooks. | | hook.resources | object | { requests: { cpu: "200m", memory: "100Mi" }, limits: { cpu: "400m", memory: "200Mi" } } | Optional resources lets you control resource limits and requests for the hook container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | hook.tolerations | list | [] | Optional tolerations settings that control how the hook job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | hook.ttlSecondsAfterFinished | string | nil | Seconds after which the kubernetes job for the hook will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | Code of secureCodeBox is licensed under the Apache License 2.0.
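To illustrate the mustache templating mentioned above, here is a sketch of what a CascadingRule resembling the listed https-tls-scan rule could look like. The matcher fields and finding attributes below are assumptions reconstructed from the rule names in the kubectl output above, so verify them against the rules shipped with the scanner charts:

```yaml
apiVersion: "cascading.securecodebox.io/v1"
kind: CascadingRule
metadata:
  name: "https-tls-scan"
  labels:
    securecodebox.io/invasive: non-invasive
    securecodebox.io/intensive: light
spec:
  matches:
    anyOf:
      # assumed finding shape: an nmap "Open Port" finding for https
      - category: "Open Port"
        attributes:
          port: 443
          service: "https"
  scanSpec:
    scanType: sslyze
    parameters:
      # mustache templates pull values out of the matched finding
      - "--regular"
      - "{{attributes.hostname}}:{{attributes.port}}"
```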
{ "category": "Provisioning", "file_name": "notification-webhook.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "The DefectDojo hook imports the reports from scans automatically into OWASP DefectDojo. The hook uses the import scan API v2 from DefectDojo to import the scan results. Scan types which are both supported by the secureCodeBox and DefectDojo benefit from the full feature set of DefectDojo, like deduplication. These scan types are (see up-to-date list in Java source): After uploading the results to DefectDojo, it will use the findings parsed by DefectDojo to overwrite the original secureCodeBox findings identified by the parser. This lets you access the finding metadata like the false positive and duplicate status from DefectDojo in further ReadOnly hooks, e.g. send out Slack notification for non-duplicate & non-false positive findings only. This hook reads only from raw findings and not from secureCodeBox findings. Because DefectDojo does a way better job on parsing the findings itself, instead of parsing our secureCodeBox finding format with the generic scan type. If you want to modify a finding before it is imported into DefectDojo you can write a custom post-processing hook which operates on the raw findings. For scan types which are not supported by DefectDojo, the generic importer is used, which will result in a less sophisticated display of the results and fewer features inside DefectDojo. In the worst case, it can lead to some findings being lost - see the note below. A big amount of findings may require higher resource limits. Changing them may be required to avoid OOM errors. The default values are: ``` requests: { cpu: \"200m\", memory: \"100Mi\"},limits: { cpu: \"400m\", memory: \"200Mi\"}``` For example, to set the memory limit to 512Mi, we run the following command: ``` helm upgrade --namespace NAMESPACE --install persistence-defectdojo secureCodeBox/persistence-defectdojo --set=\"hook.resources.limits.memory=512Mi\"``` Be careful when using the DefectDojo Hook in combination with other ReadAndWrite Hooks. By default, the secureCodeBox makes no guarantees about the execution order of multiple ReadAndWrite hooks, they can be executed in any order. This can lead to \"lost update\" problems as the DefectDojo hook will overwrite all findings, which disregards the results of previously run ReadAndWrite hooks. ReadOnly hooks work fine with the DefectDojo hook as they are always executed after ReadAndWrite Hooks. If you want to control the order of execution of the different hooks, take a look at the hook priority documentation (supported with secureCodeBox 3.4.0 and later). The DefectDojo hook will send all scan results to DefectDojo, including those for which DefectDojo does not have native support. In this case, DefectDojo may incorrectly deduplicate findings, which can in some cases lead to incomplete imports and even data loss. You can set the hook to read-only mode, which will prevent it from writing the results back to secureCodeBox (--set defectdojo.syncFindingsBack=false during installation of the hook) if you want to rule out any data loss inside secureCodeBox, but this will not prevent the incorrect deduplication from affecting the data you see inside DefectDojo (for this, you will need to contribute a parser to DefectDojo). You can also selectively disable the DefectDojo hook for certain scans using the hook selector feature (supported with secureCodeBox 3.4.0 and" }, { "data": "For development purposes, it can be useful to run this hook locally. 
You can do so by following these steps: ```
./gradlew build
java -jar build/libs/defectdojo-persistenceprovider-<VERSION>.jar https://gist.githubusercontent.com/.../scanme-nmap-org.xml https://gist.githubusercontent.com/.../nmap-findings.json https://httpbin.org/put https://httpbin.org/put
``` The persistence-defectdojo chart can be deployed via helm: ``` helm upgrade --install persistence-defectdojo secureCodeBox/persistence-defectdojo ``` Kubernetes: >=v1.11.0-0 Installing the DefectDojo persistenceProvider hook will add a ReadAndWrite Hook to your namespace. ```
kubectl create secret generic defectdojo-credentials --from-literal="username=admin" --from-literal="apikey=08b7..."
helm upgrade --install dd secureCodeBox/persistence-defectdojo \
  --set="defectdojo.url=https://defectdojo-django.default.svc"
``` The hook will automatically import the scan results into an engagement in DefectDojo. If the engagement doesn't exist the hook will create the engagement (CI/CD engagement) and all objects required for it (product & product type). The hook will then pull the imported information from DefectDojo and use it to replace the findings inside secureCodeBox. In case you use a DefectDojo instance with a self-signed root CA, upgrade the hook with: ```
helm upgrade --install dd secureCodeBox/persistence-defectdojo \
  --set="defectdojo.url=https://defectdojo-django.default.svc" \
  --set-json 'hook.extraVolumes=[{"name": "ca-dojo", "configMap": {"name": "ca-dojo"}}]' \
  --set-json 'hook.extraVolumeMounts=[{"name": "ca-dojo", "mountPath": "/etc/ssl/certs/java/cacerts", "subPath": "cacerts", "readOnly": false}]'
``` Afterwards, you can update /etc/ssl/certs/java/cacerts with your certificate. You don't need any configuration for that to work; the hook will infer engagement & product names from the scan name. If you want more control over the names or want to add additional meta information like the version of the tested software, you can add these via annotations to the scan. See examples below. | Scan Annotation | Description | Default if not set | Notes | |:--|:--|:|:-| | defectdojo.securecodebox.io/product-type-name | Name of the Product Type | Product Type with ID 1 (typically "Research and Development") | Product Type will be automatically created if no Product Type under that name exists. Product Type names need to be unique in the DefectDojo instance. | | defectdojo.securecodebox.io/product-name | Name of the Product | ScheduledScan Name if Scheduled, Scan Name if it's a standalone Scan | Product will be automatically created if no Product under that name exists. Product names need to be unique in the DefectDojo instance.
| | defectdojo.securecodebox.io/product-description | Description of the Product | Empty String | Only used when creating the Product, not used for updating | | defectdojo.securecodebox.io/product-tags | Product Tags | Nothing | Only used when creating the Product, not used for updating | | defectdojo.securecodebox.io/engagement-name | Name of the Engagement | Scan Name | Will be automatically created if no engagement with that name and version exists | | defectdojo.securecodebox.io/engagement-version | Engagement Version | Nothing | - | | defectdojo.securecodebox.io/engagement-deduplicate-on-engagement | Deduplicate On Engagement | false | Only used when creating the Engagement, not used for updating | | defectdojo.securecodebox.io/engagement-tags | Engagement Tags | Nothing | Only used when creating the Engagement, not used for updating | | defectdojo.securecodebox.io/test-title | Test Title | Scan Name | - | | defectdojo.securecodebox.io/minimum_severity | Minimum severity for findings created in DD | Nothing | Used to only create findings in DD which are of a certain severity or higher | By default, the DefectDojo hook will pull the imported results from DefectDojo and use them to replace the results inside" }, { "data": "secureCodeBox. This allows you to benefit from DefectDojo's deduplication logic and only trigger follow-up scans or notifications for new findings. If you want to disable this feature, you can install the hook in read-only mode using --set defectdojo.syncFindingsBack=false while installing the hook using Helm. Attributes such as whether a finding has been marked as accepted or as a false positive in DefectDojo are only attached to the original finding. The duplicated findings will always have the falsePositive, riskAccepted and outOfScope attributes set to false, as they have just been imported. To enable users to access this meta information on the original finding, the syncFindingsBack mode automatically embeds the original finding in the attributes of synced-back duplicate findings. The following example shows a finding produced by the syncFindingsBack mode, in which the original finding has been marked as accepted. ``` By default, the DefectDojo hook requires an API token with platform-wide \"Staff\" access rights. DefectDojo >2.0.0 refined its user access rights, allowing you to restrict a user's access rights to viewing only specific product types in DefectDojo. The secureCodeBox DefectDojo Persistence Hook can be configured to run with the token of such a \"low privileged\" user by setting defectdojo.lowPrivilegedMode=true. ```
kubectl create secret generic defectdojo-credentials --from-literal=\"apikey=08b7...\"
helm upgrade --install dd secureCodeBox/persistence-defectdojo \\
  --set=\"defectdojo.url=http://defectdojo-django.default.svc\" \\
  --set=\"defectdojo.lowPrivilegedMode=true\" \\
  --set=\"defectdojo.authentication.userId=42\"
``` It has come to our attention that DefectDojo becomes slow when handling a lot of data. A lot of data in DefectDojo can be informational findings one would like to ignore. Therefore, DefectDojo provides the option to only create findings for scan findings from a certain severity level and above, thus lowering the amount of data stored. We integrate this option into our scans by providing the \"defectdojo.securecodebox.io/minimum_severity\" annotation for scans. 
This is an example of how the minimum severity for the findings of a scan can be set: ```
apiVersion: \"execution.securecodebox.io/v1\"
kind: ScheduledScan
metadata:
  name: \"zap-juiceshop\"
  annotations:
    defectdojo.securecodebox.io/minimum_severity: \"Low\"
spec:
  interval: 24h
  scanSpec:
    scanType: \"zap-full-scan\"
    parameters:
      - \"-t\"
      - \"http://juice-shop.demo-targets.svc:3000\"
``` In this example, findings are created in DefectDojo only for scan findings with a severity of \"Low\" or higher. This will run a daily scan using ZAP on a demo target. The results will be imported using the name \"zap-juiceshop-$UNIX_TIMESTAMP\" (name of the Scan created by the ScheduledScan), in a product called \"zap-juiceshop\" in the default DefectDojo product type. ```
apiVersion: \"execution.securecodebox.io/v1\"
kind: ScheduledScan
metadata:
  name: \"zap-juiceshop\"
spec:
  interval: 24h
  scanSpec:
    scanType: \"zap-full-scan\"
    parameters:
      - \"-t\"
      - \"http://juice-shop.demo-targets.svc:3000\"
``` This will import the results into the engagement, product and product type following the annotations. The engagement will be reused by the hook for the daily scans / imports until the engagement version is increased. ```
apiVersion: \"execution.securecodebox.io/v1\"
kind: ScheduledScan
metadata:
  name: \"zap-full-scan-juiceshop\"
  annotations:
    defectdojo.securecodebox.io/product-type-name: \"OWASP\"
    defectdojo.securecodebox.io/product-name: \"Juice Shop\"
    defectdojo.securecodebox.io/product-description: |
      OWASP Juice Shop is probably the most modern and sophisticated insecure web application!
      It can be used in security trainings, awareness demos, CTFs and as a guinea pig for security tools!
      Juice Shop encompasses vulnerabilities from the entire OWASP Top Ten along with many other security flaws found in real-world applications!
    defectdojo.securecodebox.io/product-tags: vulnerable,appsec,owasp-top-ten,vulnapp
    defectdojo.securecodebox.io/engagement-name: \"Juice Shop\"
    defectdojo.securecodebox.io/engagement-version: \"v12.6.1\"
    defectdojo.securecodebox.io/engagement-tags: \"automated,daily\"
    defectdojo.securecodebox.io/engagement-deduplicate-on-engagement: \"true\"
    defectdojo.securecodebox.io/test-title: \"Juice Shop - v12.6.1\"
spec:
  interval: 24h
  scanSpec:
    scanType: \"zap-full-scan\"
    parameters:
      - \"-t\"
      -" }, { "data": "| Key | Type | Default | Description | |:--|:-|:--|:--| | defectdojo.authentication.apiKeyKey | string | \"apikey\" | Name of the apikey key in the userSecret secret. Use this if you already have a secret with different key / value pairs | | defectdojo.authentication.userId | string | nil | Set the userId explicitly. When not set, the configured username is used to look up the userId via the DefectDojo API (which is only available for privileged users) | | defectdojo.authentication.userSecret | string | \"defectdojo-credentials\" | Link a pre-existing generic secret with username and apikey key / value pairs | | defectdojo.authentication.usernameKey | string | \"username\" | Name of the username key in the userSecret secret. Use this if you already have a secret with different key / value pairs | | defectdojo.lowPrivilegedMode | bool | false | Allows the hook to run with a user's token whose access rights are restricted to one / multiple product types but doesn't have global platform rights. If set to true, the DefectDojo User ID has to be configured instead of the username (defectdojo.authentication.userId). User needs to have at least the Maintainer role in the used Product Type. 
| | defectdojo.syncFindingsBack | bool | true | Syncs back (two-way sync) all imported findings from DefectDojo to the SCB Findings Store. When set to false, the hook will only import the findings to DefectDojo (one-way sync). | | defectdojo.url | string | \"http://defectdojo-django.default.svc\" | URL of the DefectDojo instance | | hook.affinity | object | {} | Optional affinity settings that control how the hook job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | hook.env | list | [] | Optional environment variables mapped into the hook (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | hook.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into the hook (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | hook.extraVolumes | list | [] | Optional Volumes mapped into the hook (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | hook.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | hook.image.repository | string | \"docker.io/securecodebox/hook-persistence-defectdojo\" | Hook image repository | | hook.image.tag | string | defaults to the charts version | Container image tag | | hook.labels | object | {} | Add Kubernetes Labels to the hook definition | | hook.priority | int | 0 | Hook priority. Higher priority Hooks are guaranteed to execute before low priority Hooks. | | hook.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the hook container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | hook.tolerations | list | [] | Optional tolerations settings that control how the hook job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | hook.ttlSecondsAfterFinished | string | nil | Seconds after which the Kubernetes job for the hook will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | Code of secureCodeBox is licensed under the Apache License 2.0." }
]
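To make the hook selector note from the overview concrete, here is a minimal, hypothetical sketch of a scan that skips the DefectDojo hook. It assumes the hook was installed with an identifying label via the hook.labels value from the table above; the label key/value (persistence: defectdojo), the nmap scan, and the exact hookSelector expression are illustrative assumptions, not taken from this page:

```
# Hypothetical install with a label the selector can match:
#   helm upgrade --install dd secureCodeBox/persistence-defectdojo \
#     --set="defectdojo.url=https://defectdojo-django.default.svc" \
#     --set="hook.labels.persistence=defectdojo"
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "nmap-without-defectdojo"
spec:
  scanType: "nmap"
  parameters:
    - "scanme.nmap.org"
  # Run every hook except those labelled persistence=defectdojo
  # (hook selector feature, secureCodeBox >= 3.4.0).
  hookSelector:
    matchExpressions:
      - key: "persistence"
        operator: NotIn
        values:
          - "defectdojo"
```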
{ "category": "Provisioning", "file_name": "kube-hunter.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "Amass currently has a known issue where the enumeration sometimes does not exit correctly and keeps running indefinitely. This is why we recommend using the option -timeout MINUTES mitigate the issue. The scan will then exit after the specified amount of minutes, and the findings should be correctly parsed. The OWASP Amass Project has developed a tool to help information security professionals perform network mapping of attack surfaces and perform external asset discovery using open source information gathering and active reconnaissance techniques. To learn more about the Amass scanner itself visit OWASP Amass Project or Amass GitHub. The amass chart can be deployed via helm: ``` The following security scan configuration example are based on the Amass User Guide, please take a look at the original documentation for more configuration examples. Special command line options: Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:--|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-amass\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. 
(see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | |" }, { "data": "| object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | | scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [{\"mountPath\":\"/amass/output/config.ini\",\"name\":\"amass-config\",\"subPath\":\"config.ini\"}] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [{\"configMap\":{\"name\":\"amass-config\"},\"name\":\"amass-config\"}] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-amass\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts appVersion | | scanner.nameAppend | string | nil | append a string to the default scantype name. | | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":false,\"runAsNonRoot\":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. 
| | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | false | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: ``` ```" } ]
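To tie the -timeout recommendation from the top of this page to a concrete scan, here is a minimal sketch; the domain, the 20-minute budget, and the use of the -d flag for the target domain are illustrative assumptions based on the usual amass enum options:

```
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "amass-scan-example.com"
spec:
  scanType: "amass"
  parameters:
    - "-d"           # target domain to enumerate (illustrative)
    - "example.com"
    - "-timeout"     # hard stop after 20 minutes, see the known-issue note above
    - "20"
```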
{ "category": "Provisioning", "file_name": "nuclei.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "Kubeaudit finds security misconfigurations in you Kubernetes Resources and gives tips on how to resolve these. Kubeaudit comes with a large lists of \"auditors\" which test various aspects, like the SecurityContext of pods. You can find the complete list of auditors here. To learn more about the kubeaudit itself visit kubeaudit GitHub. The kubeaudit chart can be deployed via helm: ``` The following security scan configuration example are based on the [kube-hunter Documentation], please take a look at the original documentation for more configuration examples. Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:-|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | kubeauditScope | string | \"namespace\" | Automatically sets up rbac roles for kubeaudit to access the resources it scans. Can be either \"cluster\" (ClusterRole) or \"namespace\" (Role) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-kubeaudit\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. 
(see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | | scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | |" }, { "data": "| list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-kubeaudit\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts appVersion | | scanner.nameAppend | string | nil | append a string to the default scantype name. | | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":true,\"runAsNonRoot\":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | true | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. 
You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: In this example, we execute a kubeaudit scan against the intentionally vulnerable juice-shop. Before executing the scan, make sure to set up juice-shop: ```
helm upgrade --install juice-shop secureCodeBox/juice-shop --wait
``` After that, you can execute the scan in this directory: ```
kubectl apply -f scan.yaml
``` (a sketch of such a scan.yaml follows at the end of this page) Make sure to install juice-shop in the same namespace as the scanner! If your juice-shop runs in, e.g., the kubeaudit-tests namespace, install the chart and run the scan there too: ``` Also, you must adjust the namespace in the scan.yaml with the -n flag. Alternatively, you can set the scope of kubeaudit to cluster: ```
helm upgrade --install kubeaudit secureCodeBox/kubeaudit -n kubeaudit-tests --set=\"kubeauditScope=cluster\"
``` ```" }
]
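The scan.yaml referenced in the example above is not reproduced on this page. A minimal sketch, assuming the scanType matches the chart name and that no extra parameters are needed for a namespace-scoped audit, could look like this:

```
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "kubeaudit-juice-shop"   # illustrative name
spec:
  scanType: "kubeaudit"
```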
{ "category": "Provisioning", "file_name": "trivy.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments. You should NOT run kube-hunter on a Kubernetes cluster that you don't own! To learn more about the kube-hunter scanner itself visit kube-hunter GitHub or kube-hunter Website. The kube-hunter chart can be deployed via helm: ``` The following security scan configuration example are based on the kube-hunter Documentation, please take a look at the original documentation for more configuration examples. Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:--|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-kube-hunter\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. 
(see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | | scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | |" }, { "data": "| int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | | scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-kube-hunter\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts appVersion | | scanner.nameAppend | string | nil | append a string to the default scantype name. | | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":true,\"runAsNonRoot\":false} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | true | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | false | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. 
You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: ``` ```" } ]
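The example fences above lost their content during extraction. As a hedged sketch, an in-cluster hunt could look like the following; --pod is kube-hunter's own flag for hunting from inside a pod, but whether this chart expects it here is an assumption:

```
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "kube-hunter-in-cluster"
spec:
  scanType: "kube-hunter"
  parameters:
    - "--pod"   # hunt from the scanner pod's point of view (assumed usage)
```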
{ "category": "Provisioning", "file_name": "trivy-sbom.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "FFuf is an open source (MIT license) fuzzing tool to detect content and elements on webservers and web applications. People often use it as a web directory bruteforcer, but it is also capable of fuzzing much more than that (e.g. XSS, SQLi,...). To learn more about the ffuf scanner itself visit ffuf GitHub. With this scanner the secure code box also installs SecLists wordlists. The ffuf chart can be deployed via helm: ``` The mandatory parameters are -u and either -w or --input-cmd (normally -w is used): To define the test case for ffuf, use the keyword FUZZ anywhere in the URL (-u), headers (-H), or POST data (-d). Full argument description from ffuf: ``` -HTTP OPTIONS:-H Header `\"Name: Value\"`, separated by colon. Multiple -H flags are accepted.-X HTTP method to use-b Cookie data `\"NAME1=VALUE1; NAME2=VALUE2\"` for copy as curl functionality.-d POST data-ignore-body Do not fetch the response content. (default: false)-r Follow redirects (default: false)-recursion Scan recursively. Only FUZZ keyword is supported, and URL (-u) has to end in it. (default: false)-recursion-depth Maximum recursion depth. (default: 0)-recursion-strategy Recursion strategy: \"default\" for a redirect based, and \"greedy\" to recurse on all matches (default: default)-replay-proxy Replay matched requests using this proxy.-sni Target TLS SNI, does not support FUZZ keyword-timeout HTTP request timeout in seconds. (default: 10)-u Target URL-x Proxy URL (SOCKS5 or HTTP). For example: http://127.0.0.1:8080 or socks5://127.0.0.1:8080GENERAL OPTIONS:-V Show version information. (default: false)-ac Automatically calibrate filtering options (default: false)-acc Custom auto-calibration string. Can be used multiple times. Implies -ac-c Colorize output. (default: false)-config Load configuration from a file-maxtime Maximum running time in seconds for entire process. (default: 0)-maxtime-job Maximum running time in seconds per job. (default: 0)-noninteractive Disable the interactive console functionality (default: false)-p Seconds of `delay` between requests, or a range of random delay. For example \"0.1\" or \"0.1-2.0\"-rate Rate of requests per second (default: 0)-s Do not print additional information (silent mode) (default: false)-sa Stop on all error cases. Implies -sf and -se. (default: false)-se Stop on spurious errors (default: false)-sf Stop when > 95% of responses return 403 Forbidden (default: false)-t Number of concurrent threads. (default: 40)-v Verbose output, printing full URL and redirect location (if any) with the results. (default: false)MATCHER OPTIONS:-mc Match HTTP status codes, or \"all\" for everything. (default: 200,204,301,302,307,401,403,405,500)-ml Match amount of lines in response-mr Match regexp-ms Match HTTP response size-mt Match how many milliseconds to the first response byte, either greater or less than. EG: >100 or <100-mw Match amount of words in responseFILTER OPTIONS:-fc Filter HTTP status codes from response. Comma separated list of codes and ranges-fl Filter by amount of lines in response. Comma separated list of line counts and ranges-fr Filter regexp-fs Filter HTTP response size. Comma separated list of sizes and ranges-ft Filter by number of milliseconds to the first response byte, either greater or less than. EG: >100 or <100-fw Filter by amount of words in response. Comma separated list of word counts and rangesINPUT OPTIONS:-D DirSearch wordlist compatibility mode. Used in conjunction with -e flag. 
(default: false)
 -e Comma separated list of extensions. Extends FUZZ keyword.
 -ic Ignore wordlist comments (default: false)
 -input-cmd Command producing the input. --input-num is required when using this input method. Overrides -w.
 -input-num Number of inputs to test. Used in conjunction with --input-cmd. (default: 100)
 -input-shell Shell to be used for running command
 -mode Multi-wordlist operation mode. Available modes: clusterbomb, pitchfork, sniper (default: clusterbomb)
 -request File containing the raw http request
 -request-proto Protocol to use along with raw request (default: https)
 -w Wordlist file path and (optional) keyword separated by colon. eg. '/path/to/wordlist:KEYWORD'

OUTPUT OPTIONS:
 -debug-log Write all of the internal logging to the specified file.
 -o Write output to file
 -od Directory path to store matched results to.
 -of Output file format. Available formats: json, ejson, html, md, csv, ecsv (or, 'all' for all formats) (default: json)
 -or Don't create the output file if we don't have results (default: false)

EXAMPLE USAGE:
 Fuzz file paths from wordlist.txt, match all responses but filter out those with content-size 42. Colored, verbose output.
 ffuf -w wordlist.txt -u https://example.org/FUZZ -mc all -fs 42 -c -v

 Fuzz Host-header, match HTTP 200 responses.
 ffuf -w hosts.txt -u https://example.org/ -H \"Host: FUZZ\" -mc 200

 Fuzz POST JSON data. Match all responses not containing text \"error\".
 ffuf -w entries.txt -u https://example.org/ -X POST -H \"Content-Type: application/json\" \\
 -d '{\"name\": \"FUZZ\", \"anotherkey\": \"anothervalue\"}' -fr \"error\"

 Fuzz multiple locations. Match only responses reflecting the value of \"VAL\" keyword. Colored.
 ffuf -w params.txt:PARAM -w values.txt:VAL -u https://example.org/?PARAM=VAL -mr \"VAL\" -c

More information and examples: https://github.com/ffuf/ffuf
``` Kubernetes: >=v1.11.0-0 ffuf needs a wordlist file. To introduce your wordlist file to your scanner pod, you have to create a configMap: ```
kubectl create configmap --from-file /path/to/my/wordlist.txt ffuf-config
``` Or you can use the secureCodeBox predefined (simple stupid) wordlist: ```
kubectl create configmap --from-file examples/wordlist-config-map/wordlist.txt ffuf-config
``` If you are in a namespace: ```
kubectl create configmap --from-file examples/wordlist-config-map/wordlist.txt ffuf-config -n integration-tests
``` Now just mount that config in your scan and select the mounted path for your ffuf -w option (a full Scan example follows after the configuration table below). | Key | Type | Default | Description | |:-|:-|:-|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. 
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-ffuf\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be" }, { "data": "Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | | scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | | scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-ffuf\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts appVersion | | scanner.nameAppend | string | nil | append a string to the default scantype name. 
| | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":true,\"runAsNonRoot\":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | true | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: ``` ``` ```" } ]
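As referenced above, here is a minimal sketch of a scan that mounts the ffuf-config configMap and points -w at the mounted wordlist. The target URL, the mount path, and the use of spec.volumes / spec.volumeMounts on the Scan resource are illustrative assumptions:

```
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "ffuf-juice-shop-directories"
spec:
  scanType: "ffuf"
  parameters:
    - "-u"
    - "http://juice-shop.demo-targets.svc:3000/FUZZ"   # FUZZ marks the injection point
    - "-w"
    - "/wordlists/wordlist.txt"                        # path matches the mountPath below
  volumes:
    - name: ffuf-wordlist
      configMap:
        name: ffuf-config   # created with the kubectl commands above
  volumeMounts:
    - name: ffuf-wordlist
      mountPath: /wordlists/
```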
{ "category": "Provisioning", "file_name": "ssh.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "Screenshoter is a simple scanner that takes Screenshots of Websites. Therefore it's using an embeded Firefox Browser. The screenshooter chart can be deployed via helm: ``` You have to provide only the URL to the screenshooter. Be careful, the protocol is mandatory: Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:-|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-screenshooter\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | | scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration" }, { "data": "To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. 
(see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | | scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-screenshooter\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts version | | scanner.nameAppend | string | nil | append a string to the default scantype name. | | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":true,\"runAsNonRoot\":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | true | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. 
The scanner is currently supported for these CPU architectures: This scan screenshots https://github.com/secureCodeBox ``` ```" } ]
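The example fence above is empty in this copy; a minimal sketch of the described scan, assuming the scanType matches the chart name, could look like this:

```
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "screenshot-securecodebox-github"
spec:
  scanType: "screenshooter"   # assumed to match the chart name
  parameters:
    - "https://github.com/secureCodeBox"   # protocol prefix is mandatory
```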
{ "category": "Provisioning", "file_name": "typo3scan.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "Git-Repo-Scanner is a small Python script which discovers repositories on GitHub or GitLab. The main purpose of this scanner is to provide a cascading input for the gitleaks and semgrep scanners. The git-repo-scanner chart can be deployed via helm: ``` The scanner options can be divided into two groups for Gitlab and GitHub. You can choose the git repository type with the option: ``` --git-type githubor--git-type Gitlab``` For type GitHub you can use the following options: For now only organizations are supported, so the option is mandatory. We strongly recommend providing an access token for authentication, otherwise the API rate limiting will kick in after about 30 repositories scanned. For type GitLab you can use the following options: For Gitlab, the url and the access token is mandatory. If you don't provide a specific group id, all projects on the Gitlab server are going to be discovered. Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:-|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-git-repo-scanner\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. 
(see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | | scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | |" }, { "data": "| int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | | scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-git-repo-scanner\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts version | | scanner.nameAppend | string | nil | append a string to the default scantype name. | | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":true,\"runAsNonRoot\":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | true | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. 
You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: This example scans the organization secureCodeBox on github. Remember to add an access token to not encounter rate limiting: ``` ``` This example shows how to scan a specific group on a GitLab server. It also excludes certain subgroups and projects contained in this group: ```" } ]
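For illustration, a minimal Scan manifest for the GitHub case described above could look like the sketch below. It reuses the Scan resource format shown elsewhere in these docs; the flag names --organization and --access-token are assumptions (the full option list is not reproduced here), so verify them against the chart's installed ScanType before relying on them.

```
# Hypothetical sketch of a git-repo-scanner Scan; flag names are assumptions.
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "git-repo-scanner-github"
spec:
  scanType: "git-repo-scanner"      # assumed to match the chart name
  parameters:
    - "--git-type"
    - "github"
    - "--organization"              # GitHub scans are organization-based (mandatory)
    - "secureCodeBox"
    - "--access-token"              # strongly recommended to avoid API rate limiting
    - "<YOUR-GITHUB-ACCESS-TOKEN>"  # placeholder; prefer injecting it from a Secret
```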
{ "category": "Provisioning", "file_name": "wpscan.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "Typo3Scan is an open source penetration testing tool, that automates the process of detecting the Typo3 CMS version and its installed extensions. It also has a database with known vulnerabilities for core and extensions. The vulnerabilities corresponding to the version detected are presented as findings. To learn more about the Typo3Scan scanner itself, visit the Typo3Scan GitHub repository here. The typo3scan chart can be deployed via helm: ``` The Typo3Scan target is specified with the -d parameter. The target should be a url, hostname or an IP address. Please note that, the target url has to start with http:// or https:// when using a hostname or IP address as a target for the scan to work correctly. For example: http://localhost or https://123.45.67.890:80 Additional Typo3Scan scan features can be configured via the parameter attribute. Some useful example parameters listed below: Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:--|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-typo3scan\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. 
(see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | | scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | | scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | |" }, { "data": "| string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-typo3scan\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts appVersion | | scanner.nameAppend | string | nil | append a string to the default scantype name. | | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":false,\"runAsNonRoot\":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | false | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. 
You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: ``` | | scanner.securityContext.runAsNonRoot | bool | false | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | trivyDatabaseCache.enabled | bool | true | Enables or disables the use of trivy server in another pod to cache the vulnerability database for all scans. | | trivyDatabaseCache.replicas | int | 1 | amount of replicas to configure for the Deployment | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: This example shows how to use the trivy filesystem scan with the secureCodeBox. You can use an initContainer and also volumeMounts to point to the filesystem you want to scan. ``` This example shows how to use the trivy image scan with the secureCodeBox. ``` ``` This example shows how to use the trivy image scan with the secureCodeBox. ``` The findings are too large to display; you may download the file. This example shows how to use the trivy k8s scan with the secureCodeBox. ``` This example shows how to use the trivy repo scan with the secureCodeBox. ```" } ]
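A sketch of what such a trivy Scan definition could look like is shown below. The scanType name "trivy-image" and the target image are assumptions based on the scan variants mentioned above, not confirmed by this page; check the installed ScanTypes for the exact name.

```
# Hypothetical sketch of a trivy image Scan; the scanType name is an assumption.
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "trivy-juiceshop"
spec:
  scanType: "trivy-image"
  parameters:
    - "bkimminich/juice-shop:latest"  # example image to check for known vulnerabilities
```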
{ "category": "Provisioning", "file_name": "zap-advanced.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "Whatweb identifies the technologies used to build a website, such as content management systems (CMS), JavaScript libraries, web server and much more. It has multiple aggression levels and depending on this level, whatweb may also report the version of specific technologies when an appropiate aggeression level was given. This makes whatweb especially interesting to use in a cascading-rules pipeline. To learn more about the whatweb scanner itself visit https://morningstarsecurity.com/research/whatweb, GitHub repository or the Wiki on Github. The whatweb chart can be deployed via helm: ``` ``` WhatWeb - Next generation web scanner version 0.5.5.Developed by Andrew Horton (urbanadventurer) and Brendan Coles (bcoles)Homepage: https://morningstarsecurity.com/research/whatwebUsage: whatweb [options] <URLs>TARGET SELECTION: <TARGETs> Enter URLs, hostnames, IP addresses, filenames or IP ranges in CIDR, x.x.x-x, or x.x.x.x-x.x.x.x format. --input-file=FILE, -i Read targets from a file. You can pipe hostnames or URLs directly with -i /dev/stdin.TARGET MODIFICATION: --url-prefix Add a prefix to target URLs. --url-suffix Add a suffix to target URLs. --url-pattern Insert the targets into a URL. Requires --input-file, eg. www.example.com/%insert%/robots.txtAGGRESSION: The aggression level controls the trade-off between speed/stealth and reliability. --aggression, -a=LEVEL Set the aggression level. Default: 1. Aggression levels are: 1. Stealthy Makes one HTTP request per target. Also follows redirects. 3. Aggressive If a level 1 plugin is matched, additional requests will be made. 4. Heavy Makes a lot of HTTP requests per target. Aggressive tests from all plugins are used for all URLs.HTTP OPTIONS: --user-agent, -U=AGENT Identify as AGENT instead of WhatWeb/0.5.5. --header, -H Add an HTTP header. eg \"Foo:Bar\". Specifying a default header will replace it. Specifying an empty value, eg. \"User-Agent:\" will remove the header. --follow-redirect=WHEN Control when to follow redirects. WHEN may be `never`, `http-only`, `meta-only`, `same-site`, or `always`. Default: always. --max-redirects=NUM Maximum number of contiguous redirects. Default: 10.AUTHENTICATION: --user, -u=<user:password> HTTP basic authentication. --cookie, -c=COOKIES Provide cookies, e.g. `name=value; name2=value2`. --cookiejar=FILE Read cookies from a file.PROXY: --proxy <hostname[:port]> Set proxy hostname and port. Default: 8080. --proxy-user <username:password> Set proxy user and password.PLUGINS: --list-plugins, -l List all plugins. --info-plugins, -I=[SEARCH] List all plugins with detailed information. Optionally search with keywords in a comma delimited list. --search-plugins=STRING Search plugins for a keyword. --plugins, -p=LIST Select plugins. LIST is a comma delimited set of selected plugins. Default is all. Each element can be a directory, file or plugin name and can optionally have a modifier, eg. + or - Examples: +/tmp/moo.rb,+/tmp/foo.rb title,md5,+./plugins-disabled/ ./plugins-disabled,-md5 -p + is a shortcut for -p +plugins-disabled. --grep, -g=STRING|REGEXP Search for STRING or a Regular Expression. Shows only the results that match. 
Examples: --grep \"hello\" --grep \"/he[l]o/\" --custom-plugin=DEFINITION\\tDefine a custom plugin named Custom-Plugin, --custom-plugin=DEFINITION Define a custom plugin named Custom-Plugin, Examples: \":text=>'powered by abc'\" \":version=>/powered[ ]?by ab[0-9]/\" \":ghdb=>'intitle:abc \\\"powered by abc\\\"'\" \":md5=>'8666257030b94d3bdb46e05945f60b42'\" --dorks=PLUGIN List Google dorks for the selected plugin.OUTPUT: --verbose, -v Verbose output includes plugin descriptions. Use twice for debugging. --colour,--color=WHEN control whether colour is used. WHEN may be `never`, `always`, or `auto`. --quiet, -q Do not display brief logging to STDOUT. --no-errors Suppress error messages.LOGGING: --log-brief=FILE Log brief, one-line output. --log-verbose=FILE Log verbose" }, { "data": "--log-errors=FILE Log errors. --log-xml=FILE Log XML format. --log-json=FILE Log JSON format. --log-sql=FILE Log SQL INSERT statements. --log-sql-create=FILE Create SQL database tables. --log-json-verbose=FILE Log JSON Verbose format. --log-magictree=FILE Log MagicTree XML format. --log-object=FILE Log Ruby object inspection format. --log-mongo-database Name of the MongoDB database. --log-mongo-collection Name of the MongoDB collection. Default: whatweb. --log-mongo-host MongoDB hostname or IP address. Default: 0.0.0.0. --log-mongo-username MongoDB username. Default: nil. --log-mongo-password MongoDB password. Default: nil. --log-elastic-index Name of the index to store results. Default: whatweb --log-elastic-host Host:port of the elastic http interface. Default: 127.0.0.1:9200 PERFORMANCE & STABILITY: --max-threads, -t Number of simultaneous threads. Default: 25. --open-timeout Time in seconds. Default: 15. --read-timeout Time in seconds. Default: 30. --wait=SECONDS Wait SECONDS between connections. This is useful when using a single thread.HELP & MISCELLANEOUS: --short-help Short usage help. --help, -h Complete usage help. --debug Raise errors in plugins. --version Display version information. (WhatWeb 0.5.5).EXAMPLE USAGE: Scan example.com. ./whatweb example.com Scan reddit.com slashdot.org with verbose plugin descriptions. ./whatweb -v reddit.com slashdot.org An aggressive scan of wired.com detects the exact version of WordPress. ./whatweb -a 3 www.wired.com Scan the local network quickly and suppress errors. whatweb --no-errors 192.168.0.0/24 Scan the local network for https websites. whatweb --no-errors --url-prefix https:// 192.168.0.0/24* Scan for crossdomain policies in the Alexa Top 1000. ./whatweb -i plugin-development/alexa-top-100.txt \\ --url-suffix /crossdomain.xml -p crossdomain_xml``` Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:-|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. 
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-whatweb\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. | | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | |" }, { "data": "| string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | | scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | | scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-whatweb\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts appVersion | | scanner.nameAppend | string | nil | append a string to the default scantype name. 
| | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":true,\"runAsNonRoot\":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | true | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: ``` ```" } ]
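As a secureCodeBox counterpart to the CLI examples above (e.g. ./whatweb -a 3 www.wired.com), a Scan resource could look like the sketch below; the scanType name "whatweb" is an assumption derived from the chart name.

```
# Hypothetical sketch of a whatweb Scan at aggression level 3.
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "whatweb-example"
spec:
  scanType: "whatweb"    # assumed to match the chart name
  parameters:
    - "-a"
    - "3"                # aggressive: may detect exact technology versions
    - "www.example.com"  # target host
```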
{ "category": "Provisioning", "file_name": "whatweb.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "WPScan is a free, for non-commercial use, black box WordPress vulnerability scanner written for security professionals and blog maintainers to test the security of their sites. NOTE: You need to provide WPSan with an API Token so that it can look up vulnerabilities infos with https://wpscan.com. Without the token WPScan will only identify WordPress Core / Plugin / Theme versions but not if they are actually vulnerable. You can get a free API Token at by registering for an account at https://wpscan.com. Using the secureCodeBox WPScans you can specify the token via the WPVULNDBAPITOKEN target attribute, see the example below. To learn more about the WPScan scanner itself visit wpscan.org. The wpscan chart can be deployed via helm: ``` The following security scan configuration example are based on the WPScan Documentation, please take a look at the original documentation for more configuration examples. ``` Available Choices: vp | Vulnerable plugins ap | All plugins p | Plugins vt | Vulnerable themes at | All themes t | Themes tt | Timthumbs cb | Config backups dbe | Db exports u | User IDs range. e.g: u1-5 Range separator to use: '-' Value if no argument supplied: 1-10 m | Media IDs range. e.g m1-15 Note: Permalink setting must be set to \"Plain\" for those to be detected Range separator to use: '-' Value if no argument supplied: 1-100Separator to use between the values: ','Default: All Plugins, Config BackupsValue if no argument supplied: vp,vt,tt,cb,dbe,u,mIncompatible choices (only one of each group/s can be used): - vp, ap, p - vt, at, t``` Kubernetes: >=v1.11.0-0 | Key | Type | Default | Description | |:-|:-|:|:| | cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner | | imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | | parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | parser.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | parser.image.repository | string | \"docker.io/securecodebox/parser-wpscan\" | Parser image repository | | parser.image.tag | string | defaults to the charts version | Parser image tag | | parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | parser.resources | object | { requests: { cpu: \"200m\", memory: \"100Mi\" }, limits: { cpu: \"400m\", memory: \"200Mi\" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. 
| | parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | parser.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | | scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) | |" }, { "data": "| object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) | | scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) | | scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) | | scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) | | scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) | | scanner.image.pullPolicy | string | \"IfNotPresent\" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images | | scanner.image.repository | string | \"docker.io/securecodebox/scanner-wpscan\" | Container Image to run the scan | | scanner.image.tag | string | nil | defaults to the charts appVersion | | scanner.nameAppend | string | nil | append a string to the default scantype name. 
| | scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) | | scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) | | scanner.securityContext | object | {\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"all\"]},\"privileged\":false,\"readOnlyRootFilesystem\":false,\"runAsNonRoot\":false} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users privileges cannot be escalated | | scanner.securityContext.capabilities.drop[0] | string | \"all\" | This drops all linux privileges from the container. | | scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode | | scanner.securityContext.readOnlyRootFilesystem | bool | false | Prevents write access to the containers file system | | scanner.securityContext.runAsNonRoot | bool | false | Enforces that the scanner image is run as a non root user | | scanner.suspend | bool | false | if set to true the scan job will be suspended after creation. You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue | | scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) | | scanner.ttlSecondsAfterFinished | string | nil | seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ | Code of secureCodeBox is licensed under the Apache License 2.0. The scanner is currently supported for these CPU architectures: This example is only meant as a guideline for configuring wpscan. The example scan will fail as it uses example.com as a target, which isn't a valid wpscan target. ``` For this example to work, you must add a valid API Token to the scan.yaml! In this example we execute a wpscan scan against an old WordPress 4.0 instance, old-wordpress. Before executing the scan, make sure to set up old-wordpress: ``` helm upgrade --install old-wordpress secureCodeBox/old-wordpress --wait``` Then, add an API Key in scan.yaml by replacing the TODO after the --api-token flag. After that you can execute the scan in this directory: ``` kubectl apply -f scan.yaml``` ```" } ]
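The scan.yaml referenced above is not reproduced here; the sketch below shows what it could contain. The --url and --api-token flags follow the text above, while the scanType name and the in-cluster hostname are assumptions.

```
# Hypothetical sketch of the wpscan scan.yaml for the old-wordpress demo target.
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "wpscan-old-wordpress"
spec:
  scanType: "wpscan"               # assumed to match the chart name
  parameters:
    - "--url"
    - "old-wordpress.default.svc"  # assumed in-cluster service hostname
    - "--api-token"
    - "<YOUR-WPSCAN-API-TOKEN>"    # replace the TODO with a token from wpscan.com
```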
{ "category": "Provisioning", "file_name": "v0.57.1.md", "project_name": "Sonobuoy", "subcategory": "Security & Compliance" }
[ { "data": "Sonobuoy is a diagnostic tool that makes it easier to understand the state of a Kubernetes cluster by running a set of plugins (including Kubernetes conformance tests) in an accessible and non-destructive manner. It is a customizable, extendable, and cluster-agnostic way to generate clear, informative reports about your cluster. Its selective data dumps of Kubernetes resource objects and cluster nodes allow for the following use cases: Starting v0.20, Sonobuoy supports Kubernetes v1.17 or later. Sonobuoy releases will be independent of Kubernetes release, while ensuring that new releases continue to work functionally across different versions of Kubernetes. Read more about the new release cycles in our blog. Note: You can skip this version enforcement by running Sonobuoy with the --skip-preflight flag. Access to an up-and-running Kubernetes cluster. If you do not have a cluster, we recommend either: An admin kubeconfig file, and the KUBECONFIG environment variable set. For some advanced workflows it may be required to have kubectl installed. See installing via Homebrew (MacOS) or building the binary (Linux). The sonobuoy images subcommand requires Docker to be installed. See installing Docker. The following methods exist for installing Sonobuoy: Download the latest release for your client platform. Extract the tarball: ``` tar -xvf <RELEASETARBALLNAME>.tar.gz ``` Move the extracted sonobuoy executable to somewhere on your PATH. Run the command: ``` brew install sonobuoy ``` To launch conformance tests (ensuring CNCF conformance) and wait until they are finished run: ``` sonobuoy run --wait ``` Note: Using --mode quick will significantly shorten the runtime of Sonobuoy. It runs just a single test, helping to quickly validate your Sonobuoy and Kubernetes configuration. Get the results from the plugins (e.g. e2e test results): ``` results=$(sonobuoy retrieve) ``` Inspect results for test failures. This will list the number of tests failed and their names: ``` sonobuoy results $results ``` Note: The results command has lots of useful options for various situations. See the results page for more details. You can also extract the entire contents of the file to get much more detailed data about your cluster. Sonobuoy creates a few resources in order to run and expects to run within its own namespace. Deleting Sonobuoy entails removing its namespace as well as a few cluster scoped resources. ``` sonobuoy delete --wait ``` Note: The wait option ensures the Kubernetes namespace is deleted, avoiding conflicts if another Sonobuoy run is started quickly. If you have an issue with permissions in your cluster but you still want to run Sonobuoy, you can use --aggregator-permissions flag. Read more details about it here. By default, sonobuoy run runs the Kubernetes conformance tests but this can easily be configured. The same plugin that has the conformance tests has all the Kubernetes end-to-end tests which include other tests such as: To modify which tests you want to run, checkout our page on the e2e plugin. If you want to run other tests or tools which are not a part of the Kubernetes end-to-end suite, refer to our documentation on custom" }, { "data": "You can check on the status of each of the plugins running with: ``` sonobuoy status ``` You can also inspect the logs of all Sonobuoy containers: ``` sonobuoy logs ``` If you encounter any problems that the documentation does not address, file an issue. This year, Docker has started rate limiting image pulls from Docker Hub. 
We're planning a future release with a better user interface to work around this. Until then, this is the recommended approach. Sonobuoy by default pulls from Docker Hub for the sonobuoy/sonobuoy image. If you're encountering rate limiting on this, you can use the VMware-provided mirror with: ``` sonobuoy run --sonobuoy-image projects.registry.vmware.com/sonobuoy/sonobuoy:<VERSION> ``` The Kubernetes end-to-end conformance test pulls several images from Docker Hub as part of testing. To override this, you will need to create a registry manifest file locally (e.g. conformance-image-config.yaml) containing the following: ``` dockerLibraryRegistry: mirror.gcr.io/library ``` Then on running conformance: ``` sonobuoy run --sonobuoy-image projects.registry.vmware.com/sonobuoy/sonobuoy:<VERSION> --e2e-repo-config conformance-image-config.yaml ``` Technically dockerGluster is also a registry pulling from Docker Hub, but it's not part of the Conformance test suite at the moment, so overriding dockerLibraryRegistry should be enough. There are some Kubernetes e2e tests that may leak resources. Sonobuoy can help clean those up as well by deleting all namespaces prefixed with e2e: ``` sonobuoy delete --all ``` Sonobuoy requires admin permissions which won't be automatic if you are running via a Google Kubernetes Engine (GKE) cluster. You must first create an admin role for the user under which you run Sonobuoy: ``` kubectl create clusterrolebinding <your-user-cluster-admin-binding> --clusterrole=cluster-admin --user=<your.google.cloud.email@example.org> ``` We don't recommend running via a cluster set up via Docker Desktop. Known issues include: Most of these issues revolve around issues with kube-proxy on Docker Desktop, so if you know how to resolve them, let us know. These versions of Sonobuoy have a bug that runs the wrong set of tests without additional actions. See more details here. The simplest way to avoid this is to update your version of Sonobuoy to >= v0.53.2. See our current strategy document and roadmap for context on what our highest priority use cases and work items will be. Feel free to make comments on GitHub or start conversations in Slack. Thanks for taking the time to join our community and start contributing! We welcome pull requests. Feel free to dig through the issues and jump in. The most common build/test functions are called via the Makefile: ``` // Build the binary $ make build // Run local unit tests $ make test ``` If you make changes which change output, you may fail tests which utilize the golden file testing pattern (i.e. correct data is stored in external files); update them by running: ``` $ make golden ``` In most cases, running integration tests is more simply done in CI when you open a pull request. You can dig into scripts/build_funcs.sh and our .github/workflows/ci-test.yaml for exact details of existing test flows. See the list of releases to find out about feature changes. To help you get started, see the documentation." } ]
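For reference, the quickstart commands described above compose into a single smoke-test session; every command and flag used here is taken from this page.

```
# Quick end-to-end validation of a cluster with Sonobuoy.
sonobuoy run --mode quick --wait   # run a single test to validate the setup
results=$(sonobuoy retrieve)       # download the results tarball
sonobuoy results "$results"        # list the number of failed tests and their names
sonobuoy delete --wait             # remove the namespace and cluster-scoped resources
```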
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "SOPS", "subcategory": "Security & Compliance" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:--|:--|:--|-:|-:| | Latest commitHistory12 Commits | Latest commitHistory12 Commits | Latest commitHistory12 Commits | nan | nan | | assets | assets | assets | nan | nan | | content | content | content | nan | nan | | layouts | layouts | layouts | nan | nan | | static/favicons | static/favicons | static/favicons | nan | nan | | .gitignore | .gitignore | .gitignore | nan | nan | | CONTRIBUTING.md | CONTRIBUTING.md | CONTRIBUTING.md | nan | nan | | Dockerfile | Dockerfile | Dockerfile | nan | nan | | LICENSE | LICENSE | LICENSE | nan | nan | | LICENSE-CODE | LICENSE-CODE | LICENSE-CODE | nan | nan | | README.md | README.md | README.md | nan | nan | | config.yaml | config.yaml | config.yaml | nan | nan | | docker-compose.yaml | docker-compose.yaml | docker-compose.yaml | nan | nan | | docsy.work | docsy.work | docsy.work | nan | nan | | docsy.work.sum | docsy.work.sum | docsy.work.sum | nan | nan | | go.mod | go.mod | go.mod | nan | nan | | go.sum | go.sum | go.sum | nan | nan | | hugo.toml | hugo.toml | hugo.toml | nan | nan | | netlify.toml | netlify.toml | netlify.toml | nan | nan | | package.json | package.json | package.json | nan | nan | | View all files | View all files | View all files | nan | nan | Docsy is a Hugo theme module for technical documentation sites, providing easy site navigation, structure, and more. This Docsy Example Project uses the Docsy theme component as a hugo module and provides a skeleton documentation structure for you to use. You can clone/copy this project and edit it with your own content, or use it as an example. In this project, the Docsy theme is pulled in as a Hugo module, together with its dependencies: ``` $ hugo mod graph ...``` For Docsy documentation, see Docsy user guide. This Docsy Example Project is hosted on Netlify at example.docsy.dev. You can view deploy logs from the deploy section of the project's Netlify dashboard, or this alternate dashboard. This is not an officially supported Google product. This project is currently maintained. A simple way to get started is to use this project as a template, which gives you a site project that is set up and ready to use. To do this: Use the dropdown for switching branches/tags to change to the latest released tag. Click Use this template. Select a name for your new project and click Create repository from template. Make your own local working copy of your new repo using git clone, replacing https://github.com/me/example.git with your repos web URL: ``` git clone --depth 1 https://github.com/me/example.git``` You can now edit your own versions of the sites source files. If you want to do SCSS edits and want to publish these, you need to install PostCSS ``` npm install``` Building and running the site locally requires a recent extended version of Hugo. You can find out more about how to install Hugo for your environment in our Getting started" }, { "data": "Once you've made your working copy of the site repo, from the repo root folder, run: ``` hugo server``` You can run docsy-example inside a Docker container, the container runs with a volume bound to the docsy-example folder. This approach doesn't require you to install any dependencies other than Docker Desktop on Windows and Mac, and Docker Compose on Linux. 
Build the docker image ``` docker-compose build``` Run the built image ``` docker-compose up``` NOTE: You can run both commands at once with docker-compose up --build. Verify that the service is working. Open your web browser and type http://localhost:1313 in your navigation bar. This opens a local instance of the docsy-example homepage. You can now make changes to the docsy example and those changes will immediately show up in your browser after you save. To stop Docker Compose, on your terminal window, press Ctrl + C. To remove the produced images run: ``` docker-compose rm``` For more information see the Docker Compose documentation. Make sure your installed go version is 1.18 or higher. Clone the latest version of the docsy theme into the parent folder of your project. The newly created repo should now reside in a sibling folder of your site's root folder. ``` cd root-of-your-site git clone --branch v0.7.2 https://github.com/google/docsy.git ../docsy``` Now run: ``` HUGO_MODULE_WORKSPACE=docsy.work hugo server --ignoreVendorPaths \"\"``` or, when using npm, prepend local to the script you want to invoke, e.g.: ``` npm run local serve``` By using the HUGO_MODULE_WORKSPACE directive (either directly or via prefix local when using npm), the server now watches all files and directories inside the sibling directory ../docsy, too. Any changes inside the local docsy theme clone are now immediately picked up (hot reload); you can instantly see the effect of your local edits. In the command above, we used the environment variable HUGO_MODULE_WORKSPACE to tell Hugo about the local workspace file docsy.work. Alternatively, you can declare the workspace file inside your settings file hugo.toml: ``` [module] workspace = \"docsy.work\"``` Your project's hugo.toml file already contains these lines; however, the directive for workspace assignment is commented out. Remove the two trailing comment characters '//' so that this line takes effect. As you run the website locally, you may run into the following error: ``` $ hugo server WARN 2023/06/27 16:59:06 Module \"project\" is not compatible with this Hugo version; run \"hugo mod graph\" for more information. Start building sites hugo v0.101.0-466fa43c16709b4483689930a4f9ac8add5c9f66+extended windows/amd64 BuildDate=2022-06-16T07:09:16Z VendorInfo=gohugoio Error: Error building site: \"C:\\Users\\foo\\path\\to\\docsy-example\\content\\en\\_index.md:5:1\": failed to extract shortcode: template for shortcode \"blocks/cover\" not found Built in 27 ms``` This error occurs if you are running an outdated version of Hugo. As of docsy theme version v0.7.0, hugo version 0.110.0 or higher is required. See this section of the user guide for instructions on how to install Hugo. Or you may be confronted with the following error: ``` $ hugo server INFO 2021/01/21 21:07:55 Using config file: Building sites INFO 2021/01/21 21:07:55 syncing static files to / Built in 288 ms Error: Error building site: TOCSS: failed to transform \"scss/main.scss\" (text/x-scss): resource \"scss/scss/main.scss_9fadf33d895a46083cdd64396b57ef68\" not found in file cache``` This error occurs if you have not installed the extended version of Hugo. See this section of the user guide for instructions on how to install Hugo. Or you may encounter the following error: ``` $ hugo server Error: failed to download modules: binary with name \"go\" not found``` This error occurs if you have not installed the go programming language on your system.
See this section of the user guide for instructions on how to install go." } ]
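For reference, the basic local-preview workflow described above condenses into the following shell session; the repository URL is the same placeholder used earlier on this page, and the checkout directory name is assumed from it.

```
# Clone your copy of the template and serve it locally.
git clone --depth 1 https://github.com/me/example.git
cd example     # assumed checkout directory
npm install    # only needed if you want to edit and publish SCSS
hugo server    # requires a recent extended version of Hugo
```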
{ "category": "Provisioning", "file_name": "docs.github.com.md", "project_name": "SOPS", "subcategory": "Security & Compliance" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "zap.md", "project_name": "secureCodeBox", "subcategory": "Security & Compliance" }
[ { "data": "The Zed Attack Proxy (ZAP) is one of the worlds most popular free security tools and is actively maintained by hundreds of international volunteers*. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing your applications. It's also a great tool for experienced pentesters to use for manual security testing. To learn more about the ZAP scanner itself visit https://www.zaproxy.org/. To learn more about the ZAP Automation Framework itself visit https://www.zaproxy.org/docs/desktop/addons/automation-framework/. The zap chart can be deployed via helm: ``` The following security scan configuration example are based on the ZAP Docker Scan Scripts. By default, the secureCodeBox ZAP Helm Chart installs all four ZAP scripts: zap-baseline, zap-full-scan , zap-api-scan & zap-automation-scan. Listed below are the arguments supported by the zap-baseline script, which are mostly interchangeable with the other ZAP scripts (except for zap-automation-scan). For a more complete reference check out the ZAP Documentation and the secureCodeBox based ZAP examples listed below. The command line interface can be used to easily run server scans: -t www.example.com ``` Usage: zap-baseline.py -t <target> [options] -t target target URL including the protocol, eg https://www.example.comOptions: -h print this help message -c configfile config file to use to INFO, IGNORE or FAIL warnings -u configurl URL of config file to use to INFO, IGNORE or FAIL warnings -g genfile generate default config file (all rules set to WARN) -m mins the number of minutes to spider for (default 1) -r reporthtml file to write the full ZAP HTML report -w reportmd file to write the full ZAP Wiki (Markdown) report -x reportxml file to write the full ZAP XML report -J reportjson file to write the full ZAP JSON document -a include the alpha passive scan rules as well -d show debug messages -P specify listen port -D delay in seconds to wait for passive scanning -i default rules not in the config file to INFO -I do not return failure on warning -j use the Ajax spider in addition to the traditional one -l level minimum level to show: PASS, IGNORE, INFO, WARN or FAIL, use with -s to hide example URLs -n contextfile context file which will be loaded prior to spidering the target -p progressfile progress file which specifies issues that are being addressed -s short output format - dont show PASSes or example URLs -T max time in minutes to wait for ZAP to start and the passive scan to run -z zapoptions ZAP command line options e.g. -z \"-config aaa=bbb -config ccc=ddd\" --hook path to python file that define your custom hooks``` The Automation Framework allows for higher flexibility in configuring ZAP scans. Its goal is the automation of the full functionality of ZAP's GUI. The configuration of the Automation Framework differs from the other three ZAP scan types. The following security scan configuration example highlights the differences for running a zap-automation-scan. Of particular interest for us will be the -autorun option. zap-automation-scan allows for providing an automation file as a ConfigMap that defines the details of the scan. See the secureCodeBox based ZAP Automation example listed below for what such a ConfigMap would look like. 
``` Usage: zap.sh -cmd -host <target> [options] -t target target URL including the protocol, eg" }, { "data": "options: -script <script> Run the specified script from commandline or load in GUI -addoninstall <addOnId> Installs the add-on with specified ID from the ZAP Marketplace -addoninstallall Install all available add-ons from the ZAP Marketplace -addonuninstall <addOnId> Uninstalls the Add-on with specified ID -addonupdate Update all changed add-ons from the ZAP Marketplace -addonlist List all of the installed add-ons -certload <path> Loads the Root CA certificate from the specified file name -certpubdump <path> Dumps the Root CA public certificate into the specified file name, this is suitable for importing into browsers -certfulldump <path> Dumps the Root CA full certificate (including the private key) into the specified file name, this is suitable for importing into ZAP -notel Turns off telemetry calls -hud Launches a browser configured to proxy through ZAP with the HUD enabled, for use in daemon mode -hudurl <url> Launches a browser as per the -hud option with the specified URL -hudbrowser <browser> Launches a browser as per the -hud option with the specified browser, supported options: Chrome, Firefox by default 'Firefox' -openapifile <path> Imports an OpenAPI definition from the specified file name -openapiurl <url> Imports an OpenAPI definition from the specified URL -openapitargeturl <url> The Target URL, to override the server URL present in the OpenAPI definition. Refer to the help for supported format. -quickurl <target url> The URL to attack, e.g. http://www.example.com -quickout <filename> The file to write the HTML/JSON/MD/XML results to (based on the file extension) -autorun <filename> Run the automation jobs specified in the file. -autogenmin <filename> Generate template automation file with the key parameters. -autogenmax <filename> Generate template automation file with all parameters. -autogenconf <filename> Generate template automation file using the current configuration. -graphqlfile <path> Imports a GraphQL Schema from a File -graphqlurl <url> Imports a GraphQL Schema from a URL -graphqlendurl <url> Sets the Endpoint URL``` Kubernetes: >=v1.11.0-0 The secureCodeBox provides two different scanner charts (zap, zap-advanced) to automate ZAP WebApplication security scans. The first one zap comes with four scanTypes: The scanTypes zap-baseline-scan, zap-full-scan & zap-api-scan can be configured via CLI arguments which are somehow a bit limited for some advanced usecases, e.g. using custom zap scripts or configuring complex authentication settings. That's why we introduced this zap-advanced scanner chart, which introduces extensive YAML configuration options for ZAP. The YAML configuration can be split in multiple files and will be merged at start. ZAP's own Automation Framework provides similar functionality to the zap-advanced scanner chart and is set to displace it in the future. The ZAP Automation Scanner supports the use of secrets, as to not have hardcoded credentials in the scan definition. Generate secrets using the credentials that will later be used in the scan for authentication. Supported authentication methods for the ZAP Authentication scanner are Manual, HTTP / NTLM, Form-based, JSON-based, and Script-based. 
``` kubectl create secret generic unamesecret --from-literal='username=<USERNAME>'kubectl create secret generic pwordsecret --from-literal='password=<PASSWORD>'``` You can now include the secrets in the scan definition and reference them in the ConfigMap that defines the scan options. A ZAP Automation scan using JSON-based authentication may look like this: ``` apiVersion: v1kind: ConfigMapmetadata: name: \"zap-automation-scan-config\"data: 1-automation.yaml: |- env: # The environment, mandatory contexts: # List of 1 or more contexts, mandatory - name: test-config # Name to be used to refer to this context in other jobs, mandatory urls: [\"http://juiceshop.demo-targets.svc:3000\"] # A mandatory list of top level urls, everything under each url will be included includePaths: - \"http://juiceshop.demo-targets.svc:3000/.\" # An optional list of regexes to include excludePaths: - \".socket\\\\.io.\" - \".\\\\.png\" - \".\\\\.jpeg\" - \".\\\\.jpg\" - \".\\\\.woff\" - \".\\\\.woff2\" - \".\\\\.ttf\" - \".\\\\.ico\" authentication: method: \"json\" parameters: loginPageUrl: \"http://juiceshop.demo-targets.svc:3000/rest/user\" loginRequestUrl: \"http://juiceshop.demo-targets.svc:3000/rest/user/login\" loginRequestBody: '{\"email\":\"${EMAIL}\",\"password\":\"${PASS}\"}' verification: method: \"response\" loggedOutRegex: '\\Q{\"user\":{}}\\E' loggedInRegex: '\\Q<a href=\"password.jsp\">\\E' users: - name: \"juiceshop-user-1\" credentials: username: \"${EMAIL}\" password: \"${PASS}\" parameters: failOnError: true # If set exit on an error failOnWarning:" }, { "data": "# If set exit on a warning progressToStdout: true # If set will write job progress to stdout jobs: - type: passiveScan-config # Passive scan configuration parameters: maxAlertsPerRule: 10 # Int: Maximum number of alerts to raise per rule scanOnlyInScope: true # Bool: Only scan URLs in scope (recommended) - type: spider # The traditional spider - fast but doesnt handle modern apps so well parameters: context: test-config # String: Name of the context to spider, default: first context user: juiceshop-user-1 # String: An optional user to use for authentication, must be defined in the env maxDuration: 2 # Int: The max time in minutes the spider will be allowed to run for, default: 0 unlimited - type: spiderAjax # The ajax spider - slower than the spider but handles modern apps well parameters: context: test-config # String: Name of the context to spider, default: first context maxDuration: 2 # Int: The max time in minutes the ajax spider will be allowed to run for, default: 0 unlimited - type: passiveScan-wait # Passive scan wait for the passive scanner to finish parameters: maxDuration: 10 # Int: The max time to wait for the passive scanner, default: 0 unlimited - type: report # Report generation parameters: template: traditional-xml # String: The template id, default : modern reportDir: /home/securecodebox/ # String: The directory into which the report will be written reportFile: zap-results # String: The report file name pattern, default: [[yyyy-MM-dd]]-ZAP-Report-[[site]] risks: # List: The risks to include in this report, default all - high - medium - lowapiVersion: \"execution.securecodebox.io/v1\"kind: Scanmetadata: name: \"zap-example-scan\"spec: scanType: \"zap-automation-scan\" parameters: - \"-autorun\" - \"/home/securecodebox/scb-automation/1-automation.yaml\" volumeMounts: - mountPath: /home/securecodebox/scb-automation/1-automation.yaml name: zap-automation subPath: 1-automation.yaml volumes: - name: zap-automation configMap: 
For a complete overview of all the possible options you have for configuring a ZAP Automation scan, run
```
./zap.sh -cmd -autogenmax zap.yaml
```
For an overview of all required configuration options, run
```
./zap.sh -cmd -autogenmin zap.yaml
```
Alternatively, have a look at the official documentation.
| Key | Type | Default | Description |
|:--|:--|:--|:--|
| cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner |
| imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) |
| parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
| parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
| parser.image.pullPolicy | string | "IfNotPresent" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| parser.image.repository | string | "docker.io/securecodebox/parser-zap" | Parser image repository |
| parser.image.tag | string | defaults to the chart's version | Parser image tag |
| parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) |
| parser.resources | object | { requests: { cpu: "200m", memory: "100Mi" }, limits: { cpu: "400m", memory: "200Mi" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. |
| parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| parser.ttlSecondsAfterFinished | string | nil | Seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
| scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) |
| scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
| scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) |
| scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
| scanner.envFrom | list | [] | Optional mount environment variables from configMaps or secrets (see: https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#configure-all-key-value-pairs-in-a-secret-as-container-environment-variables) |
| scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) |
| scanner.extraVolumeMounts | list | [{"mountPath":"/zap/wrk","name":"zap-workdir"}] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
| scanner.extraVolumes | list | [{"emptyDir":{},"name":"zap-workdir"}] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
| scanner.image.pullPolicy | string | "IfNotPresent" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| scanner.image.repository | string | "softwaresecurityproject/zap-stable" | Container Image to run the scan |
| scanner.image.tag | string | nil | Defaults to the chart's appVersion |
| scanner.nameAppend | string | nil | Append a string to the default scantype name. |
| scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) |
| scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
| scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) |
| scanner.securityContext | object | {"allowPrivilegeEscalation":false,"capabilities":{"drop":["all"]},"privileged":false,"readOnlyRootFilesystem":false,"runAsNonRoot":false} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
| scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensure that users' privileges cannot be escalated |
| scanner.securityContext.capabilities.drop[0] | string | "all" | This drops all Linux privileges from the container. |
| scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode |
| scanner.securityContext.readOnlyRootFilesystem | bool | false | Prevents write access to the container's file system |
| scanner.securityContext.runAsNonRoot | bool | false | Enforces that the scanner image is run as a non-root user |
| scanner.suspend | bool | false | If set to true the scan job will be suspended after creation. You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue |
| scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| scanner.ttlSecondsAfterFinished | string | nil | Seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
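Every value in the table can be overridden at install or upgrade time. As a sketch (assuming the chart is consumed from the public secureCodeBox Helm repository; adjust the repository URL and release name to your environment), resource limits and a scan-type suffix could be set like this:
```
helm repo add securecodebox https://charts.securecodebox.io
helm upgrade --install zap-automation-scan securecodebox/zap-automation-scan \
  --set="scanner.nameAppend=-authenticated" \
  --set="scanner.resources.requests.cpu=500m" \
  --set="scanner.resources.requests.memory=512Mi" \
  --set="scanner.resources.limits.cpu=1000m" \
  --set="scanner.resources.limits.memory=1Gi"
```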
Code of secureCodeBox is licensed under the Apache License 2.0.
The scanner is currently supported for these CPU architectures:
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "SOPS", "subcategory": "Security & Compliance" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHub's behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal information. EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. We'll retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version.
Click here for the French version: Déclaration de confidentialité de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users' interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing efforts. If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts that image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content.
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising partners. For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies:

| Purpose | Description |
|:--|:--|
| Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. |
| Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. |
| Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHub's websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. |
| Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. |

You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the page's footer to cookie settings. You can express your preferences at any time by clicking on that link and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or services. That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties' direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law.
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com." } ]
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "SOPS", "subcategory": "Security & Compliance" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT operator. For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific language, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?.
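The qualifiers and operators above compose freely. As an illustrative query (not taken from the documentation), the following finds Ruby method definitions in one repository while skipping its test directories:
```
repo:github-linguist/linguist language:ruby NOT path:"/tests/" /def [a-z_]+/
```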
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
{ "category": "Provisioning", "file_name": "verifying-or-approving-a-domain-for-your-organization.md", "project_name": "SOPS", "subcategory": "Security & Compliance" }
[ { "data": "You can verify your ownership of domains with GitHub to confirm your organization's identity. Organization owners can verify or approve a domain for an organization. After verifying ownership of your organization's domains, a \"Verified\" badge will display on the organization's profile. To display a \"Verified\" badge, the website and email information shown on an organization's profile must match the verified domain or domains. If the website and email address shown on your organization's profile are hosted on different domains, you must verify both domains. If the website and email address use variants of the same domain, you must verify both variants. For example, if the profile shows the website www.example.com and the email address info@example.com, you would need to verify both www.example.com and example.com. If you confirm your organizations identity by verifying your domain and restricting email notifications to only verified email domains, you can help prevent sensitive information from being exposed. For more information see \"Best practices for preventing data leaks in your organization.\" To verify a domain, you must have access to modify domain records with your domain hosting service. In the upper-right corner of GitHub, select your profile photo, then click Your organizations. Next to the organization, click Settings. In the \"Security\" section of the sidebar, click Verified and approved domains. Next to \"Verified & approved domains for your enterprise account\", click Add a domain. Under \"What domain would you like to add?\", type the domain you'd like to verify, then click Add domain. Follow the instructions under \"Add a DNS TXT record\" to create a DNS TXT record with your domain hosting service. Wait for your DNS configuration to change, which may take up to 72 hours. You can confirm your DNS configuration has changed by running the dig command on the command line, replacing TXT-RECORD-NAME with the name of the TXT record created in your DNS configuration. You should see your new TXT record listed in the command output. ``` dig TXT-RECORD-NAME +nostats +nocomments +nocmd TXT ``` After confirming your TXT record is added to your DNS, follow steps one through three above to navigate to your organization's approved and verified domains. To the right of the domain that's pending verification, select the dropdown menu, then click Continue verifying. Click Verify. Optionally, once the \"Verified\" badge is visible on your organization's profile page, you can delete the TXT entry from the DNS record at your domain hosting service. Note: The ability to approve a domain not owned by your organization or enterprise is currently in beta and subject to change. In the upper-right corner of GitHub, select your profile photo, then click Your organizations. Next to the organization, click Settings. In the \"Security\" section of the sidebar, click Verified and approved domains. Next to \"Verified & approved domains for your enterprise account\", click Add a domain. Under \"What domain would you like to add?\", type the domain you'd like to verify, then click Add domain. To the right of \"Can't verify this domain?\", click Approve it instead. Read the information about domain approval, then click Approve DOMAIN. In the upper-right corner of GitHub, select your profile photo, then click Your organizations. Next to the organization, click Settings. In the \"Security\" section of the sidebar, click Verified and approved domains. 
To the right of the domain to remove, select the dropdown menu, then click Delete." } ]
{ "category": "Provisioning", "file_name": "docs.spyderbat.com.md", "project_name": "Spyderbat", "subcategory": "Security & Compliance" }
[ { "data": "Everything you need to know about our cloud-native runtime security platform that will provide you with continuous security monitoring, improved observability and timely alerting. Try Spyderbat Community Edition for Free Set up your test organization and deploy up to 5 Nano Agents Spyderbat System Requirements Learn about infrastructure prerequisites and supported OS types before deploying Spyderbat How to Install a Spyderbat Nano Agent onto a K8s Cluster How-To guide and a 6-minute video to get you started How to navigate Spyderbat Investigation Get started with monitoring your environment at runtime Spyderbat Dashboards from A to Z Latest runtime security findings at your fingertips Invite your team to experience Spyderbat potential Manage your users and their access permissions Spyderbat API Integrations View complete list of integrations Last updated 4 months ago Was this helpful? SPYDERBAT, Inc., All Rights Reserved" } ]
{ "category": "Provisioning", "file_name": "createapplication.md", "project_name": "StackHawk", "subcategory": "Security & Compliance" }
[ { "data": "Creates a new Stackhawk Application for this organization. Applications group scan results from HawkScan, and coordinate settings to influence HawkScan behavior. The application may also contain a specific environment name and id. For more information, see Applications. | Parameter | Default | Description | |:|:-|:| | applicationId | \"\" | The UUID identifier of this application. | | name | \"\" | The name of this application. | | env | \"\" | The name of this environment. Only set when requesting Application Environments. | | envId | \"\" | The UUID identifier of this environment. | | applicationStatus | \"ACTIVE\" | The status of this application. Either ACTIVE or ENV_INCOMPLETE. | The env and envId fields are not populated on the response with this endpoint. Teams Supported If the user belongs to a StackHawk Team, a created application will belong to the user's Team. Audited This is recorded as APPLICATION_ADDED User Activity." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "StackHawk", "subcategory": "Security & Compliance" }
[ { "data": "HawkScan Most web applications have pages that are only accessible to authenticated users. To effectively scan for vulnerabilities, you must test all paths, including the authenticated routes. HawkScan uses the stackhawk.yml configuration file to run scans based on your applications unique setup. To test authenticated routes, configure stackhawk.yml with your applications authentication flow details. To set up Authenticated Scanning, configure your stackhawk.yml file with the following information about your application: The first piece of information you need to set up Authenticated Scanning is your applications Authentication Type (AuthN). HawkScan supports the following authentication types: Third-party/OAuth: Most modern applications leaverage OAuth SaaS products to manage login credentials and security. HawkScan supports scripting to automate the grant type. The best grant types to automate with scripting are Client Credential and Resource Owner flows. See Third-party/OAuth for more information on configuring HawkScan for third-party OAuth authentication. Inject Cookie or Token: Another way to authenticate that HawkScan supports is externally supplying an authorization token which can be used in conjunction with either a token or a cookie to maintain the session. See Inject Cookie or Token for more information on configuring HawkScan for inject cookie or token authentication. Form with Username + Password: A common way to authenticate to a web application is by POSTing a username and password which can be verified by your server. Upon verification the server returns a cookie or token to the requesting client. See Form with Username + Password for more information on configuring HawkScan for form with username + password authentication. The second piece of information you need to set up Authenticated Scanning is how HawkScan should maintain authorization (AuthZ) throughout the scan. Session Authorization can be configured in one of the following ways: The first type of authorization HawkScan supports is cookie authorization. Cookie authorization is when upon verification the server returns a new cookie to the requesting client. The cookie is used to track your session on the server with the expectation that subsequent requests send the cookie back with the Set-Cookie response header. This cookie allows the server to track requests and maintain the" }, { "data": "See Cookie Authorization for more information on configuring HawkScan for cookie authorization. Many modern web application backends are APIs that serve data to more than just HTML-based web browsers. A common approach for authentication in this scenario is to create an API route that accepts a users credentials with a POST request of JSON data with the request returning an Authorization token as part of the JSON response payload. Once the Authorization token is obtained, it is then passed by the client as an Authorization header on all subsequent requests to protected routes. This method of authorization is commonly referred to as bearer token authorization. This approach is common for single page applications that use modern JavaScript frameworks like Angular, React, Vue.js, and others. See Bearer Token Authorization for more information on configuring HawkScan for bearer token authorization. Custom authentication and session management scripts can be used to handle complex authentication and authorization scenarios. 
If a preconfigured authentication and/or authorization style doesn't meet your needs, you can replace either with a custom script. HawkScan supports writing custom scripts in JavaScript and Kotlin with scripting support. Visit our GitHub Repo to get started. See Custom Authentication Script (script) for more information on configuring HawkScan for custom scripting authorization. The third piece of information you need to set up Authenticated Scanning is a means of testing for successful authentication. You must provide a testPath configuration to verify HawkScan authenticated its session correctly before scanning the application. The testPath configuration includes the requestMethod and requestBody options to support alternate HTTP request verbs, such as POST or PUT. The default action is GET. For more information on testPath configuration see app.authentication.testPath The fourth and final piece of information you need to set up Authenticated Scanning relates to some data points that can tell HawkScan if it is logged in or out. Throughout the scan, HawkScan will check to see if it is still logged in by the loggedInIndicator and loggedOutIndicator, found in your stackhawk.yml file under app.authentication. These are regex strings used to match against the following two main patterns:
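Putting the pieces together, a minimal stackhawk.yml sketch for form-based login with cookie session authorization might look like the following. The keys follow the app.authentication sections referenced above, but treat the exact paths, field names, cookie names, and regexes as placeholders to adapt from the configuration reference:
```
app:
  applicationId: <YOUR_APP_ID>        # from the StackHawk platform
  env: Development
  host: http://localhost:3000
  authentication:
    # Form with Username + Password (AuthN)
    usernamePassword:
      type: FORM
      loginPath: /login               # route that receives the POSTed credentials
      usernameField: username
      passwordField: password
      scanUsername: ${SCAN_USERNAME}  # injected from the environment
      scanPassword: ${SCAN_PASSWORD}
    # Cookie session authorization (AuthZ)
    cookieAuthorization:
      cookieNames:
        - "session"
    # Verify the session before scanning
    testPath:
      path: /account
      success: ".*200.*"
    # Signals that tell HawkScan whether it is still logged in
    loggedInIndicator: "\\QSign Out\\E"
    loggedOutIndicator: ".*Location:.*/login.*"
```
" } ]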
{ "category": "Provisioning", "file_name": ".md", "project_name": "StackRox", "subcategory": "Security & Compliance" }
[ { "data": "StackRox is the sister project of Red Hat Advanced Cluster Security. The project documentation is publicly available through OpenShift documentation to select specific applicationversions. We encourage your feedback by emailing community@stackrox.com, or joining the monthly community meetings to discuss the future of the projectdocumentation. Get started in your Kubernetes clustersquickly Install with Helm (Recommended) Install with the roxctl CLI Install with the OpenShift operator Release Notes Architecture Configuration Operation Integrations Backup & Restore Upgrading roxctl CLI Red Hat powers the StackRox community. The goal is to create a Kubernetes security-focused space for open source projects and education." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Teleport", "subcategory": "Security & Compliance" }
[ { "data": "The easiest, most secure way to access and protect your infrastructure Teleport Access On-demand, least privileged access, on a foundation of cryptographic identity and zero trust Teleport Identity Harden your infrastructure with Identity governance and security Teleport Policy Unify and control access policies across all of your infrastructure Modernizing Secure Access to Infrastructure Download White Paper Works with everything you have. Supported Protocols & Resource Types SSH, Kubernetes, Databases, Web Apps, Windows, Cloud Explore Integrations Access to your clouds, data centers, and everything in them. More than 170 integrations. What is Identity-Native Infrastructure Access? Download Book Why customers adopt Teleport Improve Engineer & Workforce Productivity Access to the infrastructure engineers need, when they need it Protect Infrastructure from Identity-Based Attacks Remove secrets and standing privileges as attack surfaces Meet Compliance Requirements FedRAMP, SOC 2, HIPAA, PCI, ISO 27001 Kubernetes in the Enterprise Download Report Industries with infrastructure access complexity E-Commerce & Entertainment Securing access at scale Financial Services Preventing breaches and maintaining customer trust Software-as-a-Service (SaaS) Providers Access control for growth and governance What is Identity-Native Infrastructure Access? Download Book Meet regulatory requirements for access control FedRAMP SOC 2 HIPAA Modernizing Access to Mitigate Security Risk & Speed Threat Response Feb 15 @ 9AM PT Register now Strategic relationships that enhance customer value Amazon Web Services (AWS) Control access to your critical AWS resources. Managing Multi-Account AWS Console and CLI Access with Teleport Watch Webinar Technical resources Documentation How It Works Tech Papers Tutorials Security Get hands-on experience with Teleport Try Teleport For Free Teleport Labs Teleport Connect Expert perspectives Blog Podcasts Webinars Introducing Teleport 15 Feb 1, 2024 What's new at Teleport News Blog Careers About Find out more Events Customers Partners Teleport Academy No More Backdoors: Know Who Has Access to What, Right Now June 13, 2024 Register Today Teleport Teleport is the easiest and most secure way to access and protect all your" }, { "data": "The Teleport Access Platform is a suite of software and managed services that delivers on-demand, least-privileged access to infrastructure on a foundation of cryptographic identity and Zero Trust, with built-in identity security and policy governance. Organizations use the Teleport Access Platform to: The Teleport Access Platform consists of three products: Teleport Access provides Zero Trust connectivity to all of your infrastructure. You can enable users to access servers, databases, and other infrastructure components over the public internet, even if those components are protected behind a firewall. All infrastructure resources belong to a unified inventory, with a single role-based access controls system to allow for least-privilege access. You can enable users to authenticate to resources using Single Sign-On providers like Okta, and Teleport itself can act as an identity provider for external services. Connectivity takes place through short-lived credentials that Teleport components verify through strong cryptographic techniques. Get started with Teleport Access: Teleport Identity is an add-on to Teleport Access that offers identity governance and security for all of your infrastructure. 
You can provide users with less-privileged roles by default, requiring any user who wants additional permissions to request them for a limited time. This approach leaves no permanent admin roles for attackers to target. As an additional layer of protection against phishing and exfiltration, you can restrict access to users with trusted devices. And during active security incidents, you can lock down specific users, roles, infrastructure resources and more, letting you contain the security incident with minimal disruption. An audit log provides visibility into access patterns, so you can identify weak restrictions and potential security breaches. Get started with Teleport Identity: Teleport Policy unifies and controls access policies across all your infrastructure. With Teleport Access Graph, you gain insights into role-based access control policies within Teleport and your cloud provider. Get started with Teleport Access Graph. The Teleport Access Platform consists of a certificate authority and identity-aware access proxy that run either on the Teleport-managed cloud or, in special cases, a self-hosted private network. Teleport Agents, which can run on Linux servers or Kubernetes, proxy access to infrastructure resources and cloud provider APIs. Users authenticate to infrastructure resources through Teleport Agents using short-lived certificates. Certificates embed Teleport user attributes, allowing Teleport Agents to enforce role-based access controls. Learn more:" } ]
{ "category": "Provisioning", "file_name": "docs.github.com.md", "project_name": "Tetragon", "subcategory": "Security & Compliance" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Tetragon", "subcategory": "Security & Compliance" }
[ { "data": "Table of content | Title | Authors | Conference | Date | |:--|:-|:--|-:| | eBPF-based, Kubernetes-native: observability & security with Tetragon | Anna Kapuciska | TechSpot | 2024 | | Past, Present, Future of Tetragon- First Production Use Cases, Lessons Learnt, Where Are We Heading? | John Fastabend & Natlia Rka Ivnk | KubeCon EU | 2023 | | eBPF and Kubernetes Better Together! Observability and Security with Tetragon | Anna Kapuciska & James Laverack | Kubernetes Community Days UK | 2023 | | The Next Log4jshell?! Preparing for CVEs with eBPF! | John Fastabend & Natlia Rka Ivnk | KubeCon EU | 2023 | | Tutorial: Getting Familiar with Security Observability Using eBPF & Cilium Tetragon | Duffie Cooley & Raphal Pinson | KubeCon EU | 2023 | | Securing the Superpowers: Who Loaded That eBPF Program? | John Fastabend & Natlia Rka Ivnk | CloudNative SecurityCon NA | 2023 | | Container Security and Runtime Enforcement with Tetragon | Djalal Harouni | eBPF Summit | 2022 | | You and Your Security Profiles; Generating Security Policies with the Help of eBPF | John Fastabend & Natlia Rka Ivnk | eBPF Day North America | 2022 | | Keeping your cluster safe from attacks with eBPF | Jed Salazar & Natlia Rka Ivnk | eBPF Summit | 2021 | | Uncovering a Sophisticated Kubernetes Attack in Real Time Part II. | Jed Salazar & Natlia Rka Ivnk | OReilly Superstream Series, Infrastructure & Ops | 2021 | | Uncovering a Sophisticated Kubernetes Attack in Real-Time | Jed Salazar & Natlia Rka Ivnk | KubeCon EU | 2020 | Security Observability with eBPF - Jed Salazar & Natlia Rka Ivnk, OReilly, 2022 Tetragon 1.0: Kubernetes Security Observability & Runtime Enforcement with eBPF - Thomas Graf, 2023 Tutorial: Setting Up a Cybersecurity Honeypot with Tetragon to Trigger Canary Tokens - Dean Lewis, 2023 Can I use Tetragon without Cilium? - Dean Lewis, 2023 Detecting a Container Escape with Cilium and eBPF - Natlia Rka Ivnk, 2021 Detecting and Blocking log4shell with Isovalent Cilium Enterprise - Jed Salazar, 2021 Getting Started with Tetragon - Natlia Rka Ivnk, Roland Wolters, Raphal Pinson Exploring Tetragon - A Security Observability Tool for Kubernetes, Docker, and Linux - Ivan Velichko eBPF for Runtime Enforcement | Tetragon Introduction and Overview - Rawkode Academy Restrict Access to Secure Files with Tetragon | eBPF Runtime Enforcement - Rawkode Academy 2024 The Tetragon Authors. All rights reserved The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page. Linux is a registered trademark of Linus Torvalds. Privacy Policy and Terms of Use." } ]
{ "category": "Provisioning", "file_name": "github-privacy-statement.md", "project_name": "Tetragon", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "Tetragon", "subcategory": "Security & Compliance" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:|:|:|-:|-:| | parent directory.. | parent directory.. | parent directory.. | nan | nan | | vendor | vendor | _vendor | nan | nan | | archetypes | archetypes | archetypes | nan | nan | | assets | assets | assets | nan | nan | | content/en | content/en | content/en | nan | nan | | data | data | data | nan | nan | | i18n | i18n | i18n | nan | nan | | layouts | layouts | layouts | nan | nan | | security-observability-with-ebpf | security-observability-with-ebpf | security-observability-with-ebpf | nan | nan | | static | static | static | nan | nan | | .gitignore | .gitignore | .gitignore | nan | nan | | Dockerfile.hugo | Dockerfile.hugo | Dockerfile.hugo | nan | nan | | Makefile | Makefile | Makefile | nan | nan | | README.md | README.md | README.md | nan | nan | | go.mod | go.mod | go.mod | nan | nan | | go.sum | go.sum | go.sum | nan | nan | | hugo.toml | hugo.toml | hugo.toml | nan | nan | | package-lock.json | package-lock.json | package-lock.json | nan | nan | | package.json | package.json | package.json | nan | nan | | View all files | View all files | View all files | nan | nan | See https://tetragon.io/docs/contribution-guide/documentation." } ]
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Tetragon", "subcategory": "Security & Compliance" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT operator." }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific language, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` To search for JavaScript files within a src directory, you could use: ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` To match a literal ? in a path, you can quote it: ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
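Putting the path rules together: the qualifiers above compose freely, so a single query can scope by repository, language, and a glob at once (the repository name below simply reuses the earlier example):

```
repo:github-linguist/linguist language:javascript path:/src/**/*.js
```
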
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
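As a final composed example for this entry, the boolean operators, symbol regexes, and repository-property filters described above can be combined in one query (the symbol prefix here is hypothetical):

```
(language:go OR language:rust) symbol:/^Check/ NOT is:archived
```
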
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "Tetragon", "subcategory": "Security & Compliance" }
[ { "data": "Table of content Welcome to Tetragon :) ! Were happy youre interested in contributing to the Tetragon project. While this document focuses on the technical details of how to submit patches to the Tetragon project, we value all kinds of contributions. For example, actions that can greatly improve Tetragon and contribute to its success could be: This section of the Tetragon documentation will help you make sure you have an environment capable of testing changes to the Tetragon source code, and that you understand the workflow of getting these changes reviewed and merged upstream. Make sure you have a GitHub account. Fork the Tetragon repository to your GitHub user or organization. The repository is available under github.com/cilium/tetragon. (Optional) Turn off GitHub actions for your fork. This is recommended to avoid unnecessary CI notification failures on the fork. Clone your fork and set up the base repository as upstream remote: ``` git clone https://github.com/${YOURGITHUBUSERNAMEORORG}/tetragon.git cd tetragon git remote add upstream https://github.com/cilium/tetragon.git ``` Prepare your development setup. Check out GitHub good first issues to find something to work on. If this is your first Tetragon issue, try to start with something small that you think you can do without too much external help. Also avoid assigning too many issues to yourself (see Dont Lick the Cookie!). Follow the steps in making changes to start contributing. Learn how to run the tests or how to preview and contribute to the docs. Learn how to submit a pull request to the project. Please accept our gratitude for taking the time to improve Tetragon! :) This will help you getting started with your development setup to build Tetragon Learn how to make your first changes to the project Learn how to run the tests of the project Learn how to contribute to the documentation Learn how to submit a pull request to the project Learn about the sign-off procedure Guide on how to write release notes for new contributions. 2024 The Tetragon Authors. All rights reserved The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page. Linux is a registered trademark of Linus Torvalds. Privacy Policy and Terms of Use." } ]
{ "category": "Provisioning", "file_name": "overview#functionality-overview.md", "project_name": "Tetragon", "subcategory": "Security & Compliance" }
[ { "data": "To view TCP connect events, apply the example TCP connect TracingPolicy: ``` kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/tcp-connect.yaml ``` To start monitoring events in the xwing pod run the Tetragon CLI: ``` kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespace default --pod xwing ``` In another terminal, start generate a TCP connection. Here we use curl. ``` kubectl exec -it xwing -- curl http://cilium.io ``` The output in the first terminal will capture the new connect and write, ``` process default/xwing /usr/bin/curl http://cilium.io connect default/xwing /usr/bin/curl tcp 10.244.0.6:34965 -> 104.198.14.52:80 sendmsg default/xwing /usr/bin/curl tcp 10.244.0.6:34965 -> 104.198.14.52:80 bytes 73 close default/xwing /usr/bin/curl tcp 10.244.0.6:34965 -> 104.198.14.52:80 exit default/xwing /usr/bin/curl http://cilium.io 0 ``` To disable the TracingPolicy run: ``` kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/tcp-connect.yaml ``` 2024 The Tetragon Authors. All rights reserved The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page. Linux is a registered trademark of Linus Torvalds. Privacy Policy and Terms of Use." } ]
{ "category": "Provisioning", "file_name": "express.md", "project_name": "Topaz", "subcategory": "Security & Compliance" }
[ { "data": "Aserto::Authorization is a middleware that allows Ruby applications to use Topaz as the Authorization provider. Add to your application Gemfile: ``` gem \"aserto\"``` And then execute: ``` bundle install``` Or install it yourself as: ``` gem install aserto``` The middleware accepts the following parameters: | Parameter name | Default value | Description | |:-|:|:--| | enabled | true | Enables or disables Aserto Authorization | | policy_name | \"\" | The Aserto policy name. | | instance_label | \"\" | The label of the active policy runtime. | | authorizerapikey | `\"\"`` | The authorizer API Key | | tenant_id | \"\" | The Aserto Tenant ID | | service_url | \"localhost:8282\" | Sets the URL for the authorizer endpoint. | | cert_path | \"\" | Path to the grpc service certificate when connecting to local topaz instance. | | decision | \"allowed\" | The decision that will be used by the middleware when creating an authorizer request. | | logger | STDOUT | The logger to be used by the middleware. | | identity_mapping | { type: :none } | The strategy for retrieving the identity, possible values: :jwt, :sub, :none | | disabled_for | [{}] | Which path and actions to skip the authorization for. | | on_unauthorized | -> { return [403, {}, [\"Forbidden\"]] } | A lambda that is executed when the authorization fails. | To determine the identity of the user, the middleware can be configured to use a JWT token or a claim using the identity_mapping config. ``` ``` The whole identity resolution can be overwritten by providing a custom function. ``` By default, when computing the policy path, the middleware: This behavior can be overwritten by providing a custom function: ``` A resource can be any structured data that the authorization policy uses to evaluate decisions. By default, middleware do not include a resource in authorization calls. This behavior can be overwritten by providing a custom function: ``` The middleware exposes a disable_for configuration option that accepts an array of hashes with the following keys: You can find the paths and actions using bundle exec rails routes ``` bundle exec rails routes Prefix Verb URI Pattern Controller#Action apiv1users GET /api/users(.:format) api/v1/users#index {:format=>:json} POST /api/users(.:format) api/v1/users#create {:format=>:json} apiv1user GET /api/users/:id(.:format) api/v1/users#show {:format=>:json}``` ``` ``` ```" } ]
{ "category": "Provisioning", "file_name": "api-client.md", "project_name": "Topaz", "subcategory": "Security & Compliance" }
[ { "data": "Topaz authorization SDK for Node.js, as well as Express.js-compatible middleware. This SDK is open source and can be found on GitHub. This package provides multiple capabilities: Using npm: ``` npm install @aserto/aserto-node``` Using yarn: ``` yarn add @aserto/aserto-node``` ``` interface Authorizer { config: AuthorizerConfig,};type AuthorizerConfig = { authorizerServiceUrl?: string; tenantId?: string; authorizerApiKey?: string; token?: string; authorizerCertFile?: string; insecure?: boolean;};``` ``` import { Authorizer } from \"@aserto/aserto-node\";const authClient = new Authorizer({ authorizerServiceUrl: \"localhost:8282\", authorizerCertFile: `${process.env.HOME}/.local/share/topaz/certs/grpc-ca.crt`});``` ``` import { Authorizer, identityContext, policyContext, policyInstance,} from \"@aserto/aserto-node\";const authClient = new Authorizer( { authorizerServiceUrl: \"localhost:8282\", authorizerCertFile: `${process.env.HOME}/.local/share/topaz/certs/grpc-ca.crt` },);authClient .Is({ identityContext: identityContext( \"rick@the-citadel.com\", \"SUB\" ), policyInstance: policyInstance(\"rebac\", \"rebac\"), policyContext: policyContext(\"rebac.check\", [\"allowed\"]), resourceContext: { objecttype: \"group\", objectid: \"evil_genius\", relation: \"member\", }, })``` ``` // Isawait authClient .Is({ identityContext: identityContext( \"morty@the-citadel.com\", \"SUB\" ), policyInstance: policyInstance(\"todo\", \"todo\"), policyContext: policyContext(\"todoApp.POST.todos\", [\"allowed\"]), resourceContext: { ownerID: \"fd1614d3-c39a-4781-b7bd-8b96f5a5100d\", }, })// Queryawait authClient .Is({ identityContext: identityContext( \"morty@the-citadel.com\", \"SUB\" ), policyInstance: policyInstance(\"todo\", \"todo\"), policyContext: policyContext(\"todoApp.POST.todos\", [\"allowed\"]), resourceContext: { ownerID: \"fd1614d3-c39a-4781-b7bd-8b96f5a5100d\", }, query: \"x = data\", })// DecisionTreeawait authClient .DecisionTree({ identityContext: identityContext( \"morty@the-citadel.com\", \"SUB\" ), policyInstance: policyInstance(\"todo\", \"todo\"), policyContext: policyContext(\"todoApp.POST.todos\", [\"allowed\"]), resourceContext: { ownerID: \"fd1614d3-c39a-4781-b7bd-8b96f5a5100d\", }, })// ListPoliciesawait authClient .ListPolicies({ policyInstance: policyInstance(\"todo\", \"todo\") })``` When authorization middleware is configured and attached to a server, it examines incoming requests, extracts authorization parameters like the caller's identity, calls the Aserto authorizers, and rejects messages if their access is denied. failWithError: When set to true, will forward errors to next instead of ending the response directly. 
```
interface Middleware {
  client: Authorizer;
  policy: Policy;
  resourceMapper?: ResourceMapper;
  identityMapper?: IdentityMapper;
  policyMapper?: PolicyMapper;
  failWithError?: boolean;
}

type Policy = {
  root: string;
  name?: string;
  instanceLabel?: string;
  decision?: string;
  path?: string;
};

type CheckOptions = {
  object?: ObjectMapper;
  objectId?: string | StringMapper;
  objectType?: string | StringMapper;
  relation?: string | StringMapper;
  subjectType?: string;
};

type ResourceMapper =
  | ResourceContext
  | ((req?: Request) => Promise<ResourceContext>);
type IdentityMapper = (req?: Request) => Promise<IdentityContext>;
type PolicyMapper = (req?: Request) => Promise<PolicyContext>;
```
```
function Authz()
function Check(options: CheckOptions)
```
```
const app: express.Application = express();

// Standard REST
const restMw = new Middleware({
  client: client,
  policy: {
    name: 'todo',
    instanceLabel: 'todo',
    root: 'todoApp',
  },
  resourceMapper: async (req: express.Request) => {
    if (!req.params?.id) {
      return {};
    }
    const todo = await store.get(req.params.id);
    return { ownerID: todo.OwnerID };
  },
});

app.get(\"/todos\", checkJwt, restMw.Authz(), server.list.bind(server));
app.post(\"/todos\", checkJwt, restMw.Authz(), server.create.bind(server));
app.put(\"/todos/:id\", checkJwt, restMw.Authz(), server.update.bind(server));

// Check
const rebacMw = new Middleware({
  client: authClient,
  policy: {
    name: 'policy-rebac',
    instanceLabel: 'policy-rebac',
    root: 'rebac',
  },
});

// Only users that are in the `evil_genius` group are allowed to delete todos.
app.delete(\"/todos/:id\", checkJwt, rebacMw.Check({
  objectType: \"group\",
  objectId: \"evil_genius\",
  relation: \"member\",
}));
```
To determine the identity of the user, the middleware can be configured to use a JWT token or a claim using the IdentityMapper.
```
// use the identity type sub
import { SubIdentityMapper } from \"@aserto/aserto-node\";

const restMw = new Middleware({
  client: authClient,
  policy: policy,
  identityMapper: SubIdentityMapper,
});

// use the jwt type sub from a custom header
import { JWTIdentityMapper } from \"@aserto/aserto-node\";

const restMw = new Middleware({
  client: authClient,
  policy: policy,
  identityMapper: JWTIdentityMapper(\"my-header\"),
});
```
```
// use the manual identity type
import { ManualIdentityMapper } from \"@aserto/aserto-node\";

const restMw = new Middleware({
  client: authClient,
  policy: policy,
  identityMapper: ManualIdentityMapper(\"my-identity\"),
});
```
The whole identity resolution can be overwritten by providing a custom function." }, { "data": "```
// needs to return an IdentityContext
import { identityContext } from \"@aserto/aserto-node\";

const restMw = new Middleware({
  client: authClient,
  policy: policy,
  identityMapper: async () => {
    return identityContext('test', 'SUB');
  },
});
```
The authorization policy's ID and the decision to be evaluated are specified when creating authorization Middleware, but the policy path is often derived from the URL or method being called. By default, the policy path is derived from the URL path. To provide custom logic, use a PolicyMapper. For example:
```
// needs to return a PolicyContext
import { policyContext } from \"@aserto/aserto-node\";

const restMw = new Middleware({
  client: authClient,
  policy: policy,
  policyMapper: async () => {
    return policyContext('path', ['decision']);
  },
});
```
A resource can be any structured data that the authorization policy uses to evaluate decisions. By default, the request params are included in the ResourceContext.
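To illustrate, this is a sketch of what the default mapper would produce, assuming an Express route with a path parameter:
```
// with the default resource mapper, a request such as
//   PUT /todos/fd1614d3-c39a-4781-b7bd-8b96f5a5100d
// on the route '/todos/:id' yields a ResourceContext equivalent to:
const resourceContext = { id: 'fd1614d3-c39a-4781-b7bd-8b96f5a5100d' };
```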
This behavior can be overwritten by providing a custom function:
```
const restMw = new Middleware({
  client: authClient,
  policy: policy,
  resourceMapper: async () => {
    return { customKey: 'customValue' };
  },
});
```
```
// provides a custom resource context
type ResourceMapper =
  | ResourceContext
  | ((req?: Request) => Promise<ResourceContext>);

// examples
async (req: Request) => { return { customKey: req.params.id } };
// or just a plain resource context
{ customKey: 'customValue' }
```
```
type IdentityMapper = (req?: Request) => Promise<IdentityContext>;

// You can also use the built-in identityContext function to create an identity context and pass it as the mapper response
const identityContext = (value: string, type: keyof typeof IdentityType) => IdentityContext

enum IdentityType {
  // Unknown, value not set, requests will fail with an identity-type-not-set error.
  UNKNOWN = 0,
  // None, no explicit identity context set, equals anonymous.
  NONE = 1,
  // Sub(ject), identity field contains an OAuth subject.
  SUB = 2,
  // JWT, identity field contains a JWT access token.
  JWT = 3,
  // Manual, propagates the identity field as-is, without validation, into the input object.
  MANUAL = 4,
}

// example
identityContext(\"morty@the-citadel.com\", \"SUB\")
```
```
type PolicyMapper = (req?: Request) => Promise<PolicyContext>;

// You can also use the built-in policyContext function to create a policy context and pass it as the mapper response
policyContext = (policyPath: string, decisionsList: Array<string> = [\"allowed\"])

// Example
policyContext(\"todoApp.POST.todos\", [\"allowed\"])
```
The Directory APIs can be used to get, set, or delete object instances, relation instances, and manifests. They can also be used to check whether a user has a permission or relation on an object instance.
```
type ServiceConfig = {
  url?: string;
  tenantId?: string;
  apiKey?: string;
  caFile?: string;
  rejectUnauthorized?: boolean;
};

type DirectoryV3Config = ServiceConfig & {
  reader?: ServiceConfig;
  writer?: ServiceConfig;
  importer?: ServiceConfig;
  exporter?: ServiceConfig;
  model?: ServiceConfig;
};
```
You can initialize a directory client as follows:
```
import { DirectoryServiceV3 } from \"@aserto/aserto-node\";

const directoryClient = DirectoryServiceV3({
  url: 'localhost:9292',
  caFile: `${process.env.HOME}/.local/share/topaz/certs/grpc-ca.crt`,
});
```
- `url`: hostname:port of the directory service (required)
- `apiKey`: API key for the directory service (required if using the hosted directory)
- `tenantId`: Aserto tenant ID (required if using the hosted directory)
- `caFile`: path to the directory CA file (optional)
- `rejectUnauthorized`: reject clients with invalid certificates; defaults to `true` (optional)
- `reader`: ServiceConfig for the reader client (optional)
- `writer`: ServiceConfig for the writer client (optional)
- `importer`: ServiceConfig for the importer client (optional)
- `exporter`: ServiceConfig for the exporter client (optional)
- `model`: ServiceConfig for the model client (optional)
Define a writer client that uses the same credentials but connects to" }, { "data": "a different URL; all other services will have the default configuration:
```
import { DirectoryServiceV3 } from \"@aserto/aserto-node\";

const directoryClient = DirectoryServiceV3({
  url: 'localhost:9292',
  writer: {
    url: 'localhost:9393',
  },
});
```
object({ objectType: \"type-name\", objectId: \"object-id\" }): get an object instance with the type type-name and the id object-id. For example:
```
const user = await directoryClient.object({ objectType: 'user', objectId: 'euang@acmecorp.com' });

// Handle a specific Directory Error
import { NotFoundError } from \"@aserto/aserto-node\";

try {
  await directoryClient.object({
    objectType: \"user\",
    objectId: \"euang@acmecorp.com\",
  });
} catch (error) {
  if (error instanceof NotFoundError) {
    // handle the error
  }
  throw error;
}
```
```
relation({
  subjectType: 'subject-type',
  subjectId: 'subject-id',
  relation: 'relation-name',
  objectType: 'object-type',
  objectId: 'object-id',
})
```
Get a relation of a certain type between a subject and an object. For example:
```
const identity = 'euang@acmecorp.com';
const relation = await directoryClient.relation({
  subjectType: 'user',
  subjectId: 'euang@acmecorp.com',
  relation: 'identifier',
  objectType: 'identity',
  objectId: identity,
});
```
```
relations({
  subjectType: 'subject-type',
  relation: 'relation-name',
  objectType: 'object-type',
  objectId: 'object-id',
})
```
Get an array of relations of a certain type for an object instance. For example:
```
const identity = 'euang@acmecorp.com';
const relations = await directoryClient.relations({
  subjectType: 'user',
  relation: 'identifier',
  objectType: 'identity',
  objectId: identity,
});
```
setObject({ object: $Object }): create an object instance with the specified fields. For example:
```
const user = await directoryClient.setObject({
  object: {
    type: \"user\",
    id: \"test-object\",
    properties: {
      displayName: \"test object\",
    },
  },
});
```
setRelation({ relation: Relation }): create a relation with a specified name between two objects. For example:
```
const relation = await directoryClient.setRelation({
  subjectId: 'subjectId',
  subjectType: 'subjectType',
  relation: 'relationName',
  objectType: 'objectType',
  objectId: 'objectId',
});
```
deleteObject({ objectType: \"type-name\", objectId: \"object-id\", withRelations: false }): delete an object instance with the specified type and id. For example:
```
await directoryClient.deleteObject({ objectType: 'user', objectId: 'euang@acmecorp.com' });
```
deleteRelation({ objectType: string, objectId: string, relation: string, subjectType: string, subjectId: string, subjectRelation: string }): delete a relation:
```
await directoryClient.deleteRelation({
  subjectType: 'subjectType',
  subjectId: 'subjectId',
  relation: 'relationName',
  objectType: 'objectType',
  objectId: 'objectId',
});
```
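Putting the write operations together, here is a minimal sketch. The object ids and the member relation are illustrative assumptions; they presume a manifest (like the Todo template's) in which the group type has a member relation to user:
```
// create a user and a group, then make the user a member of the group
await directoryClient.setObject({
  object: { type: 'user', id: 'jerry@the-citadel.com', properties: {}, displayName: 'Jerry' },
});

await directoryClient.setObject({
  object: { type: 'group', id: 'editor', properties: {}, displayName: 'Editors' },
});

await directoryClient.setRelation({
  subjectType: 'user',
  subjectId: 'jerry@the-citadel.com',
  relation: 'member',
  objectType: 'group',
  objectId: 'editor',
});
```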
You can evaluate graph queries over the directory, to determine whether a subject (e.g. a user) has a permission or a relation to an object instance.
checkPermission({ objectType: string, objectId: string, permission: string, subjectType: string, subjectId: string, trace: boolean }): check that a user object with the id euang@acmecorp.com has the read permission in the admin group:
```
const check = await directoryClient.checkPermission({
  subjectId: 'euang@acmecorp.com',
  subjectType: 'user',
  permission: 'read',
  objectType: 'group',
  objectId: 'admin',
});
```
checkRelation({ objectType: string, objectId: string, relation: string, subjectType: string, subjectId: string, trace: boolean }): check that euang@acmecorp.com has an identifier relation to an object with id euang@acmecorp.com and type identity:
```
const check = await directoryClient.checkRelation({
  subjectId: 'euang@acmecorp.com',
  subjectType: 'user',
  relation: 'identifier',
  objectType: 'identity',
  objectId: 'euang@acmecorp.com',
});
```
Example: find the user object associated with an identity.
```
const identity = 'euang@acmecorp.com';
const relation = await directoryClient.relation({
  subjectType: 'user',
  objectType: 'identity',
  objectId: identity,
  relation: 'identifier',
  subjectId: 'euang@acmecorp.com',
});

if (!relation) {
  throw new Error(`No relations found for identity ${identity}`);
}

const user = await directoryClient.object({
  objectId: relation.subjectId,
  objectType: relation.subjectType,
});
```
You can get, set, or delete the manifest:
```
await directoryClient.getManifest();
```
```
await directoryClient.setManifest(`
# yaml-language-server: $schema=https://www.topaz.sh/schema/manifest.json

### model ###
model:
  version: 3

### object type definitions ###
types:
  ### display_name: User ###
  user:
    relations:
      ### display_name: user#manager ###
      manager: user

  ### display_name: Identity ###
  identity:
    relations:
      ### display_name: identity#identifier ###
      identifier: user

  ### display_name: Group ###
  group:
    relations:
      ### display_name: group#member ###
      member: user
    permissions:
      read: member
`);
```
```
await directoryClient.deleteManifest();
```" }, { "data": "```
import { ImportMsgCase, ImportOpCode, objectPropertiesAsStruct } from \"@aserto/aserto-node\";

const importRequest = createAsyncIterable([
  {
    opCode: ImportOpCode.SET,
    msg: {
      case: ImportMsgCase.OBJECT,
      value: {
        id: \"import-user\",
        type: \"user\",
        properties: objectPropertiesAsStruct({ foo: \"bar\" }),
        displayName: \"name1\",
      },
    },
  },
  {
    opCode: ImportOpCode.SET,
    msg: {
      case: ImportMsgCase.OBJECT,
      value: {
        id: \"import-group\",
        type: \"group\",
        properties: {},
        displayName: \"name2\",
      },
    },
  },
  {
    opCode: ImportOpCode.SET,
    msg: {
      case: ImportMsgCase.RELATION,
      value: {
        subjectId: \"import-user\",
        subjectType: \"user\",
        objectId: \"import-group\",
        objectType: \"group\",
        relation: \"member\",
      },
    },
  },
]);

const resp = await directoryClient.import(importRequest);
await readAsyncIterable(resp);
```
```
const response = await readAsyncIterable(
  await directoryClient.export({ options: \"all\" })
);
```
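Building on the import example above, the same machinery can be driven from plain application data. A sketch, assuming createAsyncIterable and readAsyncIterable are exported by the package as the surrounding examples suggest; the users array and its shape are hypothetical:
```
import {
  createAsyncIterable,
  readAsyncIterable,
  ImportMsgCase,
  ImportOpCode,
} from '@aserto/aserto-node';

// hypothetical application data
const users = [{ email: 'alice@example.com' }, { email: 'bob@example.com' }];

// build one SET-object message per user and stream them in a single import call
const req = createAsyncIterable(
  users.map((u) => ({
    opCode: ImportOpCode.SET,
    msg: {
      case: ImportMsgCase.OBJECT,
      value: { id: u.email, type: 'user', properties: {}, displayName: u.email },
    },
  }))
);

await readAsyncIterable(await directoryClient.import(req));
```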
Use the displayStateMap middleware to set up an endpoint that returns the display state map to a caller. The endpoint is named displaystatemap by default, but can be overridden in options.
```
const { displayStateMap } = require('@aserto/aserto-node');

const options = {
  authorizerServiceUrl: 'localhost:8282', // required - must pass a valid host:port
  policyRoot: 'policy' // required - must be a string representing the policy root (the first component of the policy module name)
};

app.use(displayStateMap(options));
```
displayStateMap(options)
The is function provides an explicit mechanism for calling the Aserto authorizer. Use the is function to call the authorizer with a decision, policy, and resource, and get a boolean true or false response. The decision is a named value in the policy: the string allowed is used by convention. Examples: is('allowed'), is('enabled'), is('visible'), etc.
```
const { is } = require('@aserto/aserto-node');

const options = {
  authorizerServiceUrl: 'localhost:8282', // required - must pass a valid host:port
  policyRoot: 'policy' // required - must be a string representing the policy root (the first component of the policy module name)
};

app.get('/users/:id', async function(req, res) {
  try {
    const allowed = await is('allowed', req, options);
    if (allowed) {
      ...
    } else {
      res.status(403).send(\"Unauthorized\");
    }
  } catch (e) {
    res.status(500).send(e.message);
  }
});
```
is(decision, req, options[, packageName[, resourceMap]]):
- decision: simply a string that correlates to a decision referenced in the policy: for example, allowed, enabled, etc.
- req: the Express request object.
- packageName: by convention, Aserto policy package names are of the form policyRoot.METHOD.path. By default, the package name will be inferred from the policy name, HTTP method, and route path. Passing the packageName parameter into the is() function will override this behavior.
- resourceMap: by default, the resource map will be req.params. For example, if the route path is /api/users/:id, the resource will be { 'id': 'value-of-id' }. Passing the resourceMap parameter into the is() function will override this behavior.
The Topaz authorizer exposes SSL-only endpoints. In order for a Node.js policy to properly communicate with the authorizer, TLS certificates must be verified. In order for the aserto-node package to perform the TLS handshake, it needs to verify the TLS certificate of the Topaz authorizer using the certificate of the CA that signed it - which was placed in $HOME/.local/share/topaz/certs/grpc-ca.crt. Therefore, in order for this middleware to work successfully, either the authorizerCertCAFile option must be set to the correct path for the CA cert file, or the disableTlsValidation flag must be set to true.
Furthermore, when packaging a policy for deployment (e.g. in a Docker container) which uses aserto-node to communicate with an authorizer that has a self-signed TLS certificate, you must copy this CA certificate into the container as part of the Docker build (typically performed in the Dockerfile). When you do that, you'll need to override the authorizerCertCAFile option that is passed into any of the API calls defined above with the location of this cert file.
Alternatively, to ignore TLS certificate validation when creating a TLS connection to the authorizer, you can set the disableTlsValidation option to true. This option is not recommended for production." } ]
{ "category": "Provisioning", "file_name": "install.md", "project_name": "Topaz", "subcategory": "Security & Compliance" }
[ { "data": "topaz is available on Linux, macOS, and Windows platforms. Binaries for Linux, Windows, and macOS are available as tarballs on the releases page.
Via Homebrew for macOS or LinuxBrew for Linux:
```
brew tap aserto-dev/tap && brew install aserto-dev/tap/topaz
```
Via WinGet for Windows 10+:
```
winget install Aserto.Topaz
```
On Windows 10+, download the topaz_windows_x86_64.msi file from the GitHub releases page and execute the MSI.
Via Go install:
```
go install github.com/aserto-dev/topaz/cmd/topaz@latest
```
These instructions help you get Topaz up and running as the authorizer for a sample Todo app. The Topaz authorizer is packaged as a Docker container. Before using the container image installer, ensure that you have a Docker runtime such as Docker Desktop installed. You can get the latest image using the following command:
```
topaz install
```
If you get the following error/warning from Topaz commands: \"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\", be sure to allow the default Docker socket to be used in your Docker Desktop Advanced settings.
Topaz has a set of pre-built templates that contain three types of artifacts: a configuration file, a directory manifest (model), and sample data (objects and relations). You can use the CLI to install the todo template:
```
topaz templates install todo
```
This command installs configuration artifacts in the Topaz configuration directory. To find out where this is, see configuration. Unless you've set $XDG_CONFIG_HOME, this should be $HOME/.config/topaz/.
```
tree $HOME/.config/topaz
/Users/ogazitt/.config/topaz
├── cfg
│   └── todo.yaml
└── topaz.json
```
The command also installs data and template artifacts in the Topaz data directory ($XDG_DATA_HOME/topaz), which defaults to $HOME/.local/share/topaz on Mac/Linux and $HOME\AppData\Local\topaz on Windows. When Topaz starts, it will also create certificates in a certs directory under this path.
```
tree $HOME/.local/share/topaz
/Users/ogazitt/.local/share/topaz
├── certs
│   ├── gateway-ca.crt
│   ├── gateway.crt
│   ├── gateway.key
│   ├── grpc-ca.crt
│   ├── grpc.crt
│   └── grpc.key
├── db
│   └── todo.db
└── tmpl
    └── todo
        ├── data
        │   ├── citadel_objects.json
        │   ├── citadel_relations.json
        │   ├── todo_objects.json
        │   └── todo_relations.json
        └── model
            └── manifest.yaml
```
`certs/` contains a set of generated self-signed certificates for Topaz. `db/todo.db` contains the embedded database which houses the model and data. `tmpl/todo/data/` contains the objects and relations that the template loads: the `citadel` files contain the users and groups associated with the \"Citadel\" demo IDP (which are based on the \"Rick & Morty\" cartoon), and the `todo` files contain the template-specific objects and relations. `tmpl/todo/model/manifest.yaml` contains the directory manifest for the Todo template.
:::tip
For a deeper overview of the `cfg/config.yaml` file, see configuration.
:::
#### What just happened?
Besides laying down the artifacts mentioned, installing the Todo template did the following things:
- started Topaz in daemon (background) mode (see `topaz start --help`).
- set the manifest found in `tmpl/todo/model/manifest.yaml` (see `topaz directory set manifest --help`).
- imported the objects and relations found in `tmpl/todo/data/` (see `topaz directory import --help`).
- opened a browser window to the Topaz console (see `topaz console --help`).
Feel free to play around with the Topaz console! Or follow the next few steps to interact with the Topaz policy and authorization endpoints.
### Issue an API call
To verify that Topaz is running with the right policy image, you can issue a `curl` call to interact with the REST API. This API call retrieves the set of policies that Topaz has loaded:
```shell
curl -k https://localhost:8383/api/v2/policies
```
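If you prefer to script this from Node.js, the same call can be issued with the built-in fetch. A quick sketch; because Topaz serves a self-signed certificate locally, it disables TLS verification, which is acceptable only for local development:
```
// local development only: skip verification of Topaz's self-signed certificate
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';

// equivalent of: curl -k https://localhost:8383/api/v2/policies
const res = await fetch('https://localhost:8383/api/v2/policies');
console.log(await res.json()); // the set of policy modules Topaz has loaded
```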
Issue an authorization request using the is REST API to verify that the user Rick is allowed to GET the list of todos:
```shell
curl -k -X POST 'https://localhost:8383/api/v2/authz/is' \\
-H 'Content-Type: application/json' \\
-d '{
  \"identity_context\": {
    \"type\": \"IDENTITY_TYPE_SUB\",
    \"identity\": \"rick@the-citadel.com\"
  },
  \"policy_context\": {
    \"path\": \"todoApp.GET.todos\",
    \"decisions\": [\"allowed\"]
  }
}'
```
You can issue the same request with the Topaz CLI:
```shell
topaz authorizer eval '{
  \"identity_context\": {
    \"type\": \"IDENTITY_TYPE_SUB\",
    \"identity\": \"rick@the-citadel.com\"
  },
  \"policy_context\": {
    \"path\": \"todoApp.GET.todos\",
    \"decisions\": [\"allowed\"]
  }
}' -i
```
To interact with the authorizer endpoint over gRPC, install grpcui or grpcurl and point them to localhost:8282:
```shell
grpcui --insecure localhost:8282
```
To interact with the directory endpoint, use localhost:9292:
```shell
grpcui --insecure localhost:9292
```" } ]
{ "category": "Provisioning", "file_name": "github-privacy-statement.md", "project_name": "Trivy-Operator", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organization's data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you post." }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service.
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 – D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 – D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they control)." }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users.
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our content." }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service.
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, \"Confidential Information\"), regardless of whether it is marked or identified as such." }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the \"Purpose\"), and not for any other purpose.
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, \"Feedback\"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement.
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service \"as is\" and \"as available\", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes.
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "expense. Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement.
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Trivy-Operator", "subcategory": "Security & Compliance" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "operator. For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific language, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?.
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "name). For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for a number of languages, and we are working on adding support for more. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier, which supports values such as archived and fork, shown in the examples below. For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for.
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
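As a closing illustration (not taken from the article itself), the qualifiers documented above can be combined into a single query; the organization, languages, path glob, and regular expression below are arbitrary examples of the forms described in this article:

```
org:github (language:ruby OR language:python) path:/src/**/* NOT is:archived /sparse.*index/
```

Read left to right: limit results to one organization, accept either of two languages, anchor the glob to the src directory and all of its subdirectories, drop archived repositories, and require file content matching the regular expression sparse.*index.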
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "Trivy", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement.

By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub.

You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information.

Short version: You may close your Account at any time. If you do, we'll treat your information responsibly.

It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request.

We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade.

GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time.

All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability.

Short version: We use email and other electronic means to stay in touch with our users.

For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper.
This section does not affect your non-waivable rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent.

GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support.

Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect.

GitHub provides the Website and the Service "as is" and "as available," without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement.

GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service.

Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you.

You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service, or that otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control.

Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved.

If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes.
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your expense.

Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them.

We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository.

We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice.

Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California.

GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void.

Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding.

If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement.
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q (Changes to These Terms). These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement, oral or written, and any other communications between you and GitHub relating to the subject matter of these terms, including any confidentiality or nondisclosure agreements.

Questions about the Terms of Service? Contact us through the GitHub Support portal.
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "Trivy", "subcategory": "Security & Compliance" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:-|:-|:-|-:|-:| | parent directory.. | parent directory.. | parent directory.. | nan | nan | | build | build | build | nan | nan | | community | community | community | nan | nan | | docs | docs | docs | nan | nan | | ecosystem | ecosystem | ecosystem | nan | nan | | getting-started | getting-started | getting-started | nan | nan | | imgs | imgs | imgs | nan | nan | | overrides | overrides | overrides | nan | nan | | tutorials | tutorials | tutorials | nan | nan | | index.md | index.md | index.md | nan | nan | | View all files | View all files | View all files | nan | nan |" } ]
{ "category": "Provisioning", "file_name": "docs.github.com.md", "project_name": "Trivy", "subcategory": "Security & Compliance" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]