{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "kiosk", "subcategory": "Automation & Configuration" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT operator." }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` To search for JavaScript files within a src directory, you can use: ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? glob character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` To search for a path containing a literal ? character, you can use a quoted string: ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
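These qualifiers can be combined to narrow results further. As an illustrative sketch (the search term detect and the lib directory are hypothetical choices, not taken from the examples above), the following query restricts a search to Ruby files that are direct descendants of a lib directory in a single repository: ``` repo:github-linguist/linguist language:ruby path:/lib/*.rb detect ``` Each piece uses syntax described above: repo: scopes the search to one repository, language: filters by language, the anchored glob limits matching paths, and the bare term detect must appear in the file content or path.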
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class name)." }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is currently supported for a limited set of languages. We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports values such as archived and fork. For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expression features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "kiosk", "subcategory": "Automation & Configuration" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organization's data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you post." }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 through D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 through D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they control)." }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our content." }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, \"Confidential Information\"), regardless of whether it is marked or identified as such." }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the \"Purpose\"), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, \"Feedback\"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan: For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage: Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details. Invoicing: For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S. Dollars." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable rights." }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service \"as is\", and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service \"as is\" and \"as available\", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service, or that otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your expense." }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement, oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Provisioning", "file_name": "intro.md", "project_name": "KubeDL", "subcategory": "Automation & Configuration" }
[ { "data": "From project root directory, run ``` kubectl apply -f config/crd/bases/``` A single yaml file including everything: deployment, rbac etc. ``` kubectl apply -f https://raw.githubusercontent.com/kubedl-io/kubedl/master/config/manager/allinone.yaml``` KubeDL controller is installed under kubedl-system namespace. Running the command from master branch uses the daily docker image. ``` kubectl apply -f https://raw.githubusercontent.com/kubedl-io/kubedl/master/console/dashboard.yaml``` The dashboard will list nodes. Hence, its service account requires the list node permission. Check the dashboard. ``` kubectl delete namespace kubedl-system``` ``` kubectl get crd | grep kubedl.io | cut -d ' ' -f 1 | xargs kubectl delete crd``` ``` kubectl delete clusterrole kubedl-leader-election-rolekubectl delete clusterrolebinding kubedl-manager-rolebinding``` KubeDL supports all kinds of jobs(tensorflow, pytorch etc.) in a single Kubernetes operator. You can selectively enable the kind of jobs to support. There are three options:" } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Kubefirst", "subcategory": "Automation & Configuration" }
[ { "data": "kubefirst is a free, fully automated, and instantly operational open source platform that includes some of the most popular open source tools available in the Kubernetes space, all working together in a click. By running our installer in your cloud, you'll get a GitOps cloud management and application delivery ecosystem complete with automated Terraform workflows, Vault secrets management, GitLab or GitHub integrations with Argo, and a demo application that demonstrates how it all pieces together. The fastest way to explore the kubefirst platform! With kubefirst k3d, you can explore some of the best parts of the kubefirst platform running for free on a local k3d cluster in 5 minutes - without any cloud costs or domain prerequisites. Scale with confidence on Akamai Connected Cloud. With more distribution, reliability, and visibility, Akamai Connected Cloud puts applications closer to your users and keeps threats farther away. Our AWS cloud platform can accommodate all the needs of your enterprise. All you need is a domain in addition to a hosted zone, and within 35 minutes of running a single command, you'll have a secure EKS infrastructure management and application delivery platform. The perfect cloud environment when Kubernetes will be the center of attention. A simple cloud footprint with a powerful open source cloud native tool set for identity and infrastructure management, application delivery, and secrets management. Cloud native infrastructure with incredibly fast provisioning times. Whatever your visiona SaaS app, a website, an eCommerce storebuild it here using DigitalOcean's simple, cost-effective cloud hosting services. Google Cloud Platform, offered by Google, is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, Google Drive, and YouTube. The certified Kubernetes distribution built for IoT & Edge computing A cloud hosting provider that offers high-performance SSD-based cloud servers, block storage, object storage, and dedicated servers in multiple locations worldwide." } ]
{ "category": "Provisioning", "file_name": "docs.kubefirst.com#what-is-kubefirst.md", "project_name": "Kubefirst", "subcategory": "Automation & Configuration" }
[ { "data": "kubefirst is a free, fully automated, and instantly operational open source platform that includes some of the most popular open source tools available in the Kubernetes space, all working together in a click. By running our installer in your cloud, you'll get a GitOps cloud management and application delivery ecosystem complete with automated Terraform workflows, Vault secrets management, GitLab or GitHub integrations with Argo, and a demo application that demonstrates how it all pieces together. The fastest way to explore the kubefirst platform! With kubefirst k3d, you can explore some of the best parts of the kubefirst platform running for free on a local k3d cluster in 5 minutes - without any cloud costs or domain prerequisites. Scale with confidence on Akamai Connected Cloud. With more distribution, reliability, and visibility, Akamai Connected Cloud puts applications closer to your users and keeps threats farther away. Our AWS cloud platform can accommodate all the needs of your enterprise. All you need is a domain in addition to a hosted zone, and within 35 minutes of running a single command, you'll have a secure EKS infrastructure management and application delivery platform. The perfect cloud environment when Kubernetes will be the center of attention. A simple cloud footprint with a powerful open source cloud native tool set for identity and infrastructure management, application delivery, and secrets management. Cloud native infrastructure with incredibly fast provisioning times. Whatever your visiona SaaS app, a website, an eCommerce storebuild it here using DigitalOcean's simple, cost-effective cloud hosting services. Google Cloud Platform, offered by Google, is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, Google Drive, and YouTube. A cloud hosting provider that offers high-performance SSD-based cloud servers, block storage, object storage, and dedicated servers in multiple locations worldwide." } ]
{ "category": "Provisioning", "file_name": "docs.github.com.md", "project_name": "LinuxKit", "subcategory": "Automation & Configuration" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "docs.kubefirst.com#__docusaurus_skipToContent_fallback.md", "project_name": "Kubefirst", "subcategory": "Automation & Configuration" }
[ { "data": "kubefirst is a free, fully automated, and instantly operational open source platform that includes some of the most popular open source tools available in the Kubernetes space, all working together in a click. By running our installer in your cloud, you'll get a GitOps cloud management and application delivery ecosystem complete with automated Terraform workflows, Vault secrets management, GitLab or GitHub integrations with Argo, and a demo application that demonstrates how it all pieces together. The fastest way to explore the kubefirst platform! With kubefirst k3d, you can explore some of the best parts of the kubefirst platform running for free on a local k3d cluster in 5 minutes - without any cloud costs or domain prerequisites. Scale with confidence on Akamai Connected Cloud. With more distribution, reliability, and visibility, Akamai Connected Cloud puts applications closer to your users and keeps threats farther away. Our AWS cloud platform can accommodate all the needs of your enterprise. All you need is a domain in addition to a hosted zone, and within 35 minutes of running a single command, you'll have a secure EKS infrastructure management and application delivery platform. The perfect cloud environment when Kubernetes will be the center of attention. A simple cloud footprint with a powerful open source cloud native tool set for identity and infrastructure management, application delivery, and secrets management. Cloud native infrastructure with incredibly fast provisioning times. Whatever your visiona SaaS app, a website, an eCommerce storebuild it here using DigitalOcean's simple, cost-effective cloud hosting services. Google Cloud Platform, offered by Google, is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, Google Drive, and YouTube. A cloud hosting provider that offers high-performance SSD-based cloud servers, block storage, object storage, and dedicated servers in multiple locations worldwide." } ]
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "LinuxKit", "subcategory": "Automation & Configuration" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "name). For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is currently supported for a limited set of languages. We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports values such as archived and fork. For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expression features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for.
For example, given the following query: ``` printf(\"hello world\\n\"); ``` code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "LinuxKit", "subcategory": "Automation & Configuration" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organization's data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "post. You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service.
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 through D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 through D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "control). If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users.
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "content. In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service.
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, \"Confidential Information\"), regardless of whether it is marked or identified as" }, { "data": "such. You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the \"Purpose\"), and not for any other purpose.
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, \"Feedback\"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan: For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage: Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details. Invoicing: For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement.
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service \"as is\" and \"as available\", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service or otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes.
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "expense. Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement.
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "MAAS", "subcategory": "Automation & Configuration" }
[ { "data": "| Topic | Unnamed: 1 | Replies | Views | Activity | |:--|-:|-:|--:|:--| | MAAS Documentation Docs documentation MAAS is Metal As A Service, a service that treats physical servers like virtual machines (instances) in the cloud. No need to manage servers individually: MAAS turns bare metal into an elastic, cloud-like resource. Enli | nan | 5 | 27085 | 3 May 2023 | | About monitoring and logging Docs | nan | 0 | 217 | 16 April 2024 | | How to configure controllers Docs documentation | nan | 0 | 18 | 8 June 2024 | | About commissioning machines Docs | nan | 0 | 677 | 6 February 2024 | | IP Mode: DHCP as default? Docs | nan | 1 | 21 | 6 June 2024 | | MAAS equivalent of xcat groups for bulk commands? Docs | nan | 1 | 33 | 5 June 2024 | | How to commission machines with MAAS Docs | nan | 2 | 527 | 17 May 2024 | | Reference: Release notes MAAS 3.4 Docs | nan | 1 | 2457 | 9 May 2024 | | PPC64 Deployment Issues" }, { "data": "Docs | nan | 1 | 90 | 8 May 2024 | | How to install MAAS Docs documentation | nan | 22 | 7640 | 3 May 2024 | | How to manage VMFS datastores Docs | nan | 0 | 466 | 15 February 2024 | | How to manage machines Docs | nan | 0 | 562 | 5 February 2024 | | How to build a RHEL 7 image Docs | nan | 0 | 454 | 13 February 2024 | | How to deploy a FIPS-compliant kernel Docs | nan | 0 | 807 | 2 January 2024 | | How to create custom storage Docs | nan | 0 | 477 | 15 February 2024 | | About deploying machines Docs | nan | 0 | 634 | 16 February 2024 | | How to allocate machines with MAAS Docs | nan | 0 | 461 | 8 February 2024 | | About the OSI model Docs | nan | 0 | 661 | 5 February 2024 | | About machine basics Docs | nan | 0 | 597 | 20 February 2024 | | About LXD Docs | nan | 0 | 544 | 13 February 2024 | | About custom images Docs | nan | 0 | 610 | 16 February 2024 | | How to use resource pools Docs | nan | 0 | 561 | 5 February 2024 | | How to manage storage Docs | nan | 0 | 565 | 5 February 2024 | | How to manage partitions Docs | nan | 0 | 472 | 15 February 2024 | | How to manage block devices Docs | nan | 0 | 448 | 15 February 2024 | | How to deploy VMs on IBM Z Docs | nan | 0 | 533 | 13 February 2024 | | How to change MAAS settings Docs | nan | 0 | 315 | 25 March 2024 | | About the machine life-cycle Docs | nan | 0 | 625 | 5 February 2024 | | About machine customisation Docs | nan | 0 | 598 | 16 February 2024 | | About deploying running machines Docs | nan | 0 | 572 | 16 February 2024 | MAAS is Metal As A Service, a service that treats physical servers like virtual machines (instances) in the cloud. No need to manage servers individually: MAAS turns bare metal into an elastic, cloud-like resource. Enli Powered by Discourse, best viewed with JavaScript enabled" } ]
{ "category": "Provisioning", "file_name": "cloud.md", "project_name": "ManageIQ", "subcategory": "Automation & Configuration" }
[ { "data": "You can try ManageIQ in one of the public clouds that are supported. The benefits of this option are that you dont need any hardware yourself, and that you can also use the same public cloud as the platform to be managed. In the instructions below we will use the Google Cloud Platform. The ManageIQ project publishes ready-to-use images on Google Storage. We will assume that you have a Google account with an active payment method or a free trial. You also need to make sure that you have a default keypair installed. The ManageIQ project is working on making it easy to try out ManageIQ on other clouds as well. From console.cloud.google.com, go to Compute Engine, Images and then click on Create Image: Fill in the following data: Name: manageiq-petrosian-1 Family: centos-7 Source: cloud storage file Cloud storage file: manageiq/petrosian-1.tar.gz Once the image is created, you can create a new instance based on this image. Go to Compute Engine, VM instances and then click on Create instance. Its recommended to select the 2 CPU / 7.5GB instance. Under boot disk, select the image that you created above. You also need to make sure that HTTP traffic is enabled. Now hit Create to start the instance. ManageIQ is now up and running. Next step is to perform some basic configuration." } ]
{ "category": "Provisioning", "file_name": "github-privacy-statement.md", "project_name": "LinuxKit", "subcategory": "Automation & Configuration" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement. GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "activity. The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under one of several lawful bases. Depending on your residence location, you may have specific legal rights regarding your Personal Data. To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioner's Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commission's decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/.
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHub's behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "information. EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. We'll retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version.
Click here for the French version: Déclaration de confidentialité de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users' interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "efforts. If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content.
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called \"flash cookies\" (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "partners. For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies:

| Purpose | Description |
|:--|:--|
| Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. |
| Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. |
| Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHub's websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. |
| Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. |
You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages: Any GitHub page that serves non-essential cookies will have a link in the page's footer to cookie settings. You can express your preferences at any time by clicking on that link and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites: You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term "personal information" as an equivalent to the term "Personal Data." These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of "Services," which includes GitHub applications, software, products, or services. That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the "Do Not Share My Personal Information" link on the footer of our Websites or use the Global Privacy Control ("GPC") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the "Shine the Light" law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes ("California Customers") may request information about whether the business has disclosed personal information to any third parties for the third parties' direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law.
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com." } ]
{ "category": "Provisioning", "file_name": "docker.md", "project_name": "ManageIQ", "subcategory": "Automation & Configuration" }
[ { "data": "You can test ManageIQ running in a Docker container using the images that the ManageIQ project makes available on the Docker Hub. This is a great option if you have a Linux PC (but it works everywhere Docker is available). If you are on Linux, make sure the Docker service is running: ``` $ sudo systemctl start docker ``` Pull the ManageIQ docker image: ``` $ docker pull manageiq/manageiq:petrosian-1 ``` ``` $ docker run -d -p 8443:443 manageiq/manageiq:petrosian-1 ``` ManageIQ is now up and running. Next step is to perform some basic configuration." } ]
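Before moving on to configuration, it can help to confirm the container came up cleanly; a minimal sketch, assuming the docker run invocation above (the first boot can take a few minutes, and the container ID placeholder is hypothetical):

```
$ docker ps --filter ancestor=manageiq/manageiq:petrosian-1
$ docker logs <container-id>
```

Once the appliance is up, the UI should answer on https://localhost:8443 in your browser, since the run command maps host port 8443 to the container's HTTPS port 443.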
{ "category": "Provisioning", "file_name": "vagrant.md", "project_name": "ManageIQ", "subcategory": "Automation & Configuration" }
[ { "data": "If you are new to ManageIQ, read this first to get an overview of the concepts, and try ManageIQ easily with walkthroughs to configure, add a provider, and provision your first instance. Get a more in-depth look at the usage of ManageIQ, with installation instructions for different environments, and details on how to administer, authenticate, and integrate with the ManageIQ management engine. The Automate feature in ManageIQ is a huge topic in itself, so Peter McGowan wrote a book on it. This tutorial style book will guide you through the steps of doing something in automate, with a lot of code samples. In this guide you will find a detailed overview of REST API, reference material, appendices, and plenty of examples. The guides provide information on how to integrate ManageIQ with external applications. It details the specification of the ManageIQ REST API, which is implemented as standard REST HTTP requests and responses of content type JSON. Interested in developing ManageIQ and extending its features? With this set of documentation, you will set up your development environment, learn about the architecture, coding styles and standards to get you contributing in no time!" } ]
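As a taste of the REST API mentioned above, a first request against a local appliance might look like the sketch below; the URL follows the Docker quickstart earlier in this document, and the credentials are an assumption (replace them with your own):

```
$ curl -k -u admin:smartvm https://localhost:8443/api
```

The response is a JSON document describing the API entrypoint and available collections.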
{ "category": "Provisioning", "file_name": "developers.html.md", "project_name": "OpenStack", "subcategory": "Automation & Configuration" }
[ { "data": "The goal of this document is to walk you through the concepts and specifics that should be understood while contributing to projects hosted in the OpenDev infrastructure. The steps necessary to create a Gerrit account are described in Getting Started. Only extra recommended steps will be described here. While development on OpenDev only requires an account on the OpenDev Gerrit Code Review System, effective development in OpenDev hosted projects often requires interacting with other developers in IRC channels on OFTC. It is recommended to start by getting set up on IRC so that one can ask questions if one encounters issues with other phases of account setup. If you do not know how to connect to OFTC, the Connecting to OFTC document will help. If you're going to be interacting with others through OFTC on a regular basis, it's also recommended you Register your IRC Nick since it's how they'll know you're you. This will allow you to reclaim your name later if someone starts using it while you're not connected, or even (with some additional settings) prevent anyone else from using it at all. You can find the OpenDev community in the #opendev IRC channel on OFTC. For further information about the use of IRC in OpenStack, see the IRC Guide. Projects that are part of the OpenStack project require signing the Individual Contributor License Agreement; see these detailed instructions. Careful users may wish to verify the SSH host key fingerprints for the Gerrit service the first time they connect from a new system. Depending on which key types your client is configured to negotiate, you may see some or all of these listed:

```
256  SHA256:/aPoKpg+804wdezs21L9djZ4bOsLudpGF7m7779XVuQ [review.opendev.org]:29418 (ECDSA)
2048 SHA256:RXNl/GKyDaKiIQ93BoDvrNSKUPFvA1PNeAO9QiirYZU [review.opendev.org]:29418 (RSA)
256  SHA256:lHsyuBxtcAiZeJM+viHllq52he9JNPqg8FFKv5+/BJ8 [review.opendev.org]:29418 (ED25519)
```

Git-review normally communicates with Gerrit using SSH over port 29418 with no further configuration needed. However, if you suspect that SSH over non-standard ports might be blocked (or you need to access the web using https) then you can configure git-review to use an https endpoint instead of SSH. Keep in mind that you will need to generate an HTTP password in Gerrit to use this connection. You should run the following command before git review -s:

```
git remote add gerrit https://<username>@review.opendev.org/<umbrella repository name>/<repository name>.git
```

In case you had already tried to set up git-review and it failed, it might be necessary to remove the Gerrit remote from git:

```
git remote rm gerrit
```

Bug reports for a project are generally tracked on Launchpad at https://bugs.launchpad.net/<projectname>, or on StoryBoard (https://storyboard.openstack.org). Contributors may review these reports regularly when looking for work to complete. There are 4 key tasks with regard to bugs that anyone can do:

- Confirm new bugs: When a bug is filed, it is set to the New status. A New bug can be marked Confirmed once it has been reproduced and is thus confirmed as genuine.
- Solve inconsistencies: Make sure bugs are Confirmed, and if assigned, that they are marked In Progress.
- Review incomplete bugs: See if the information that caused them to be marked Incomplete has been provided, determine if more information is required, and provide reminders to the bug reporter if they haven't responded after 2-4 weeks.
- Review stale In Progress bugs: Work with the assignee of a bug to determine if it is still being worked on; if not, unassign them and mark the bug back to Confirmed or Triaged.

Learn more about working with bugs for various projects at: https://wiki.openstack.org/wiki/BugTriage Bug statuses are documented here: https://docs.openstack.org/project-team-guide/bugs.html If you find a bug that you wish to work on, you may assign it to yourself. When you upload a review, include the bug in the commit message for automatic updates back to Launchpad or StoryBoard. The following options are available for Launchpad:

```
Closes-Bug: #######
Partial-Bug: #######
Related-Bug: #######
```

and for StoryBoard:

```
Task: ######
Story: ######
```

Mentioning the story will create a handy link to the story from Gerrit, and link to the Gerrit patch in StoryBoard. Mentioning the task will change the task status in StoryBoard to review while the patch is in review, and then merged once the patch is merged. When all tasks in a story are marked merged, the story will automatically change status from active to merged. If the patch is abandoned, the task status will change back to todo. It's currently best to note both story and task so that the task status will update and people will be able to find the related story. Also see the Including external references section of the OpenStack Git Commit Good Practices wiki page. Many OpenStack project teams have a <projectteam>-specs repository which is used to hold approved design specifications for additions and changes to the project team's code repositories. The layout of the repository will typically be something like:

```
specs/<release>/
```

It may also have subdirectories to make clear which specifications are approved and which have already been implemented:

```
specs/<release>/approved
specs/<release>/implemented
```

You can typically find an example spec in specs/template.rst. Check the repository for the project team you're working on for specifics about repository organization. Specifications are proposed for a given release by adding them to the specs/<release> directory and posting them for review. The implementation status of a blueprint for a given release can be found by looking at the blueprint in Launchpad. Not all approved blueprints will get fully implemented. Specifications have to be re-proposed for every release. The review may be quick, but even if something was previously approved, it should be re-reviewed to make sure it still makes sense as written. Historically, Launchpad blueprints were used to track the implementation of these significant features and changes in OpenStack. For many project teams, these Launchpad blueprints are still used for tracking the current status of a specification. For more information, see the Blueprints wiki page. View all approved project teams' specifications at https://specs.openstack.org/. The Getting Started page explains how to originally clone and prepare a git repository. This only has to be done once, as you can reuse the cloned repository for multiple changes.
Before creating your topic branch, just make sure you have the latest upstream changes:

```
git remote update
git checkout master
git pull --ff-only origin master
```

You may pick any name for your git branch. By default, it will be reused as the topic for your change in Gerrit:

```
git checkout -b TOPIC-BRANCH
```

Best practices recommend, if you are working on a specific blueprint, to name your topic branch bp/BLUEPRINT where BLUEPRINT is the name of a blueprint in Launchpad (for example, bp/authentication). The general convention when working on bugs is to name the branch bug/BUG-NUMBER (for example, bug/1234567). If you want to use a different Gerrit topic name from the git branch name, you can use the following command to submit your change:

```
git review -t TOPIC
```

Git commit messages should start with a short summary of 50 characters or less in a single paragraph. The following paragraph(s) should explain the change in more detail. If your change addresses a blueprint or a bug, be sure to mention them in the commit message using the following syntax:

```
Implements: blueprint BLUEPRINT
Closes-Bug: ####### (Partial-Bug or Related-Bug are options)
```

For example:

```
Adds keystone support

...Long multiline description of the change...

Implements: blueprint authentication
Closes-Bug: #123456
Change-Id: I4946a16d27f712ae2adf8441ce78e6c0bb0bb657
```

Note that in most cases the Change-Id line should be automatically added by a Gerrit commit hook installed by git-review. If you already made the commit and the Change-Id was not added, do the Gerrit setup step and run: git commit --amend. The commit hook will automatically add the Change-Id when you finish amending the commit message, even if you don't actually make any changes. Do not change the Change-Id when amending a change as that will confuse Gerrit. Make your changes, commit them, and submit them for review:

```
git commit -a
```

Note Do not check in changes on your master branch. Doing so will cause merge commits when you pull new upstream changes, and merge commits will not be accepted by Gerrit. Projects may require the use of a Signed-off-by, and even if they do not, you are welcome to include Signed-off-by in your commits. By doing so, you are certifying that the following is true:

```
Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or

(b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or

(c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.

(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
```

A Signed-off-by header takes the following form in a commit message:

```
Signed-off-by: Full Name <email@example.com>
```

If you add the -s option to git commit, this header will be added automatically:

```
git commit -s
```

Before submitting your change, you should test it. To learn how to run Python-based unit tests in OpenStack projects, see Running Python Unit Tests. Before submitting your change, you should also make sure that your change does not contain files or lines that you do not explicitly intend to change:

```
git show
```

Once you have committed a change to your local repository, all you need to do to send it to Gerrit for code review is run:

```
git review
```

When that completes, automated tests will run on your change and other developers will peer review it. If the code review process suggests additional changes, make and amend the changes to the existing commit. Leave the existing Change-Id: footer in the commit message as-is. Gerrit knows that this is an updated patchset for an existing change:

```
git commit -a --amend
git review
```

It's important to understand how Gerrit handles changes and patch sets. Gerrit combines the Change-Id in the commit message, the project, and the target branch to uniquely identify a change. A new patch set is determined by any modification in the commit hash. When a change is initially pushed up it only has one patch set. When an update is done for that change, git commit --amend will change the most current commit's hash because it is essentially a new commit with the changes from the previous state combined with the new changes added. Since it has a new commit hash, once a git review is successfully processed, a new patch set appears in Gerrit. Since a patch set is determined by a modification in the commit hash, many git commands will cause new patch sets. Three common ones that do this are: git commit --amend, git rebase, and git cherry-pick. As long as you leave the Change-Id line in the commit message alone and continue to propose the change to the same target branch, Gerrit will continue to associate the new commit with the already existing change, so that reviewers are able to see how the change evolves in response to comments. If you have made many small commits, you should squash them so that they do not show up in the public repository. Remember: each commit becomes a change in Gerrit, and must be approved separately. If you are making one change to the project, squash your many checkpoint commits into one commit for public consumption. Here's how:

```
git checkout master
git pull origin master
git checkout TOPIC-BRANCH
git rebase -i master
```

Use the editor to squash any commits that should not appear in the public history. If you want one change to be submitted to Gerrit, you should only have one pick line at the end of this process. After completing this, you can prepare your public commit message(s) in your editor. You start with the commit message from the commit that you picked, and it should have a Change-Id line in the message. Be sure to leave that Change-Id line in place when editing. Once the commit history in your branch looks correct, run git review to submit your changes to Gerrit.
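For illustration, when you run git rebase -i master with three checkpoint commits on your branch, the todo list that opens in your editor might look like the sketch below (the hashes and messages here are made up); changing pick to squash on the later lines melds each of them into the commit above:

```
pick 8f0d1c2 Add keystone support
squash 3b9e4d7 Fix failing unit test
squash c71a5e9 Address review comments
```

After saving, git opens the combined commit message for editing; as described above, keep the Change-Id line from the picked commit intact.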
When you want to start new work that is based on the commit under review, you can add the commit as a dependency. Fetch the change under review and check out a branch based on that change:

```
git review -d $PARENT_CHANGE_NUMBER
git checkout -b $DEV_TOPIC_BRANCH
```

Edit files, add files to git:

```
git commit -a
git review
```

Note git review rebases the existing change (the dependency) and the new commit if there is a conflict against the branch they are being proposed to. Typically this is desired behavior as merging cannot happen until these conflicts are resolved. If you don't want to deal with new patchsets in the existing change immediately you can pass the -R option to git review in the last step above to prevent rebasing. This requires future rebasing to resolve conflicts. If the commit your work depends on is updated, and you need to get the latest patchset from the depended commit, you can do the following. Fetch and check out the parent change:

```
git review -d $PARENT_CHANGE_NUMBER
```

Cherry-pick your commit on top of it:

```
git review -x $CHILD_CHANGE_NUMBER
```

Submit the rebased change for review:

```
git review
```

The note for the previous example applies here as well. Typically you want the rebase behavior in git review. If you would rather postpone resolving merge conflicts you can use git review -R as the last step above. Sometimes the target branch you are working on has changed, which can create a merge conflict with your patch. In this case, you need to rebase your commit on top of the current state of the branch. This rebase needs to be done manually. Check out and update master:

```
$ git checkout master
$ git remote update
```

Check out the working branch and rebase on master:

```
$ git review -d 180503
$ git rebase origin/master
```

If git indicates there are merge conflicts, view the affected files:

```
$ git status
```

Edit the listed files to fix conflicts, then add the modified files:

```
$ git add <file1> <file2> <file3>
```

Confirm that all conflicts are resolved, then continue the rebase:

```
$ git status
$ git rebase --continue
```

If your change has a dependency on a change outside of that repository, like a change for another repository or some manual setup, you have to ensure that the changes merge at the right time. For a change depending on a manual setup, mark your change with the Work in Progress label until the manual setup is done. A core reviewer might also block an important change with a -2 so that it does not get merged accidentally before the manual setup is done. If your change has a dependency on a change in another repository, you can use cross-repo dependencies (CRD) in Zuul: To use them, include Depends-On: <gerrit-change-url> in the footer of your commit message. Use the permalink of the change. This is output by Gerrit when running git-review on the change, or you can find it in the top-left corner of the Gerrit web interface. Where it says "Change ###### - Needs", the number is the link to the change; you can copy and paste that URL. A patch can also depend on multiple changes as explained in Multiple Changes. These are one-way dependencies only; do not create a cycle. When Zuul sees CRD changes, it serializes them in the usual manner when enqueuing them into a pipeline. This means that if change A depends on B, then when they are added to the gate pipeline, B will appear first and A will follow. If tests for B fail, both B and A will be removed from the pipeline, and it will not be possible for A to merge until B does.
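As a concrete sketch, the commit message footer for a python-novaclient change that depends on a python-keystoneclient change could look like the following; the change URL and Change-Id shown are placeholders, not real changes:

```
Add client support for the new auth endpoint

...Long multiline description of the change...

Depends-On: https://review.opendev.org/c/openstack/python-keystoneclient/+/123456
Change-Id: I0123456789abcdef0123456789abcdef01234567
```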
Note that if changes with CRD do not share a change queue (such as the integrated gate), then Zuul is unable to enqueue them together, and the first will be required to merge before the second is enqueued. When changes are enqueued into the check pipeline, all of the related dependencies (both normal git-dependencies that come from parent commits as well as CRD changes) appear in a dependency graph, as in the gate pipeline. This means that even in the check pipeline, your change will be tested with its dependency. So changes that were previously unable to be fully tested until a related change landed in a different repo may now be tested together from the start. All of the changes are still independent (so you will note that the whole pipeline does not share a graph as in the gate pipeline), but for each change tested, all of its dependencies are visually connected to it, and they are used to construct the git references that Zuul uses when testing. When looking at this graph on the Zuul status page, you will note that the dependencies show up as grey dots, while the actual change tested shows up as red or green. This is to indicate that the grey changes are only there to establish dependencies. Even if one of the dependencies is also being tested, it will show up as a grey dot when used as a dependency, but separately and additionally will appear as its own red or green dot for its test. A Gerrit URL refers to a single change on a single branch, so if your change depends on multiple changes, or the same change on multiple branches of a project, you will need to explicitly list each URL. Simply add another Depends-On: line to the footer for each additional change. If a cycle is created by use of CRD, Zuul will abort its work very early. There will be no message in Gerrit and no changes that are part of the cycle will be enqueued into any pipeline. This is to protect Zuul from infinite loops. The developers hope that they can improve this to at least leave a message in Gerrit in the future. But in the meantime, please be cognizant of this and do not create dependency cycles with Depends-On lines. Keep in mind that these dependencies are dependencies on changes in other repositories. Thus, a Depends-On only enforces an ordering but is not otherwise visible, especially in these cases:

- Changes for the CI infrastructure, like changes to openstack/project-config, are never tested in a production-simulated environment. So, if one of the changes adjusts the job definitions or creates a new job, a Depends-On will not test the new definition; the CI infrastructure change needs to merge to master and be in production to be fully effective.
- If a test job installs packages from PyPI and not via source, be aware that the package from PyPI will always be used; a Depends-On will not cause a modified package to be used instead of installing from PyPI. As an example, if you are testing a change in python-novaclient that needs a change in python-keystoneclient, you add a Depends-On in the python-novaclient change. If a python-novaclient job installs python-keystoneclient from PyPI, the Depends-On will not have any effect since the PyPI version is used. If a python-novaclient job installs python-keystoneclient from source, the checked out source will have the change applied.

Do not add a Depends-On to an abandoned change; your change will never merge. If you backport a change to another branch, it will receive a new URL.
If you need to additionally depend on the backported change, you will need to amend the commit message to add another Depends-On footer. A change that is dependent on another can be approved before the dependent change merges. If the repositories share the gate queue, it will merge automatically after the dependent change merged. But if the repositories do not share the gate queue, it will not merge automatically when the dependent change has merged, even a recheck will not help. Zuul waits for a status change and does not see it. The change needs another approval or a toggle of the approval, toggle means removing the approval and readding it again. Log in to https://review.opendev.org/ to see proposed changes, and review them. To provide a review for a proposed change in the Gerrit UI, click on the Reply... button. In the code review, you can add a message, as well as a vote (+1,0,-1). Its also possible to add comments to specific lines in the file, for giving context to the comment. For that look at the diff of changes done in the file (click the file name), and click on the line number for which you want to add the inline comment. After you add one or more inline comments, you still have to send the Review message (see above, with or without text and vote). Prior to sending the inline comments in a review comment the inline comments are stored as Drafts in your browser. Other reviewers can only see them after you have submitted them as a comment on the patchset. Any developer may propose or comment on a change (including voting +1/0/-1 on it). A vote of +2 is allowed from core reviewers, and should be used to indicate that they are a core reviewer and are leaving a vote that should be counted as such. Some OpenDev hosted projects, like many OpenStack project teams, have a policy requiring two positive reviews from core reviewers. When a review has enough +2 reviews and one of the core team believes it is ready to be merged, he or she should leave a +1 vote in the Workflow category. You may do so by clicking the Review button again, with or without changing your code review vote and optionally leaving a comment. When a +1 Approved review is received, Zuul will run tests on the change, and if they pass, it will be" }, { "data": "A green checkmark indicates that the review has met the requirement for that category. Under Code-Review, only one +2 gets the green check. For more details on reviews in Gerrit, check the Gerrit documentation. When a new patchset is uploaded to Gerrit, that projects check tests are run on the patchset by Zuul. Once completed the test results are reported to Gerrit by Zuul in the form of a Verified: +/-1 vote. After code reviews have been completed and a change receives an Approved: +1 vote that projects gate tests are run on the change by Zuul. Zuul reports the results of these tests back to Gerrit in the form of a Verified: +/-2 vote. Code merging will only occur after the gate tests have passed successfully and received a Verified: +2. You can view the state of tests currently being run on the Zuul Status page. If a change fails tests in Zuul, please follow the steps below: Zuul leaves a comment in the review with links to the log files for the test run. Follow those links and examine the output from the test. It will include a console log, and in the case of unit tests, HTML output from the test runner, or in the case of a devstack-gate test, it may contain quite a large number of system logs. 
Examine the console log or other relevant log files to determine the cause of the error. If it is related to your change, you should fix the problem and upload a new patchset. Do not use recheck. It is possible that the CI infrastructure may be having some issues which are causing your tests to fail. You can verify the status of the OpenDev infrastructure by doing the following: https://wiki.openstack.org/wiki/Infrastructure_Status @OpenStackInfra on Twitter. the topic in your projects IRC channel (or #opendev) Note If a job fails in the automated testing system with the status of POST_FAILURE rather than a normal FAILURE, it could either be that your tests resulted with the system under test losing network connectivity or an issue with the automated testing system. If you are seeing repeated POST_FAILURE reports with no indication of problems in the CI system, make sure that your tests are not impacting the network of the system. It may be the case that the problem is due to non-deterministic behavior unrelated to your change that has already merged. In this situation, you can help other developers and focus the attention of QA, CI, and developers working on a fix by performing the following steps: For OpenStack projects, check elastic-recheck to see whether the bug is already identified and if not, add it. If your error is not there, then: Identify which project or projects are affected, and search for a related bug on Launchpad. You can search for bugs affecting all OpenStack Projects here: https://bugs.launchpad.net/openstack/ If you do not find an existing bug, file a new one (be sure to include the error message and a link to the logs for the failure). If the problem is due to an infrastructure problem (such as Zuul or Gerrit), file (or search for) the bug against the openstack-gate project. It may also happen that the CI infrastructure somehow cannot finish a job and restarts" }, { "data": "If this happens several times, the job is marked as failed with a message of RETRY_LIMIT. Usually this means that network connectivity for the job was lost and the change itself causes the job node to become unreachable consistently. To re-run check or gate jobs, leave a comment on the review with the form recheck. A patchset has to be approved to run tests in the gate pipeline. If the patchset has failed in the gate pipeline (it will have been approved to get into the gate pipeline) a recheck will first run the check jobs and if those pass, it will again run the gate jobs. There is no way to only run the gate jobs, the check jobs will first be run again. More information on debugging automated testing failures can be found in the following recordings: Tales From The Gate Debugging Failures in the OpenStack Gate After patches land, jobs can be run in the post queue. Finding build logs for these jobs works a bit differently to the results of the pre-merge check and gate queues. For jobs in the post queue, logs are found via the builds tab of https://zuul.opendev.org/, for example to search for post jobs of the openstack tenant, go to the Zuul openstack build tab . Anyone can be a reviewer: participating in the review process is a great way to learn about social norms and the development processes. General review flow: Review is a conversation that works best when it flows back and forth. Submitters need to be responsive to questions asked in comments, even if the score is +0 from the reviewer. 
Likewise, reviewers should not use a negative score to elicit a response if they are not sure the patch should be changed before merging. For example, if there is a patch submitted which a reviewer cannot fully understand because there are changes that aren't documented in the commit message or code documentation, this is a good time to issue a negative score. Patches need to be clear in their commit message and documentation. As a counter-example, a patch which is making use of a new library, which the reviewer has never used before, should not elicit a negative score from the reviewer with a question like "Is this library using standard Python sockets for communication?" That is a question the reviewer can answer themselves, and which should not hold up the review process while the submitter explains things. Either the author or a reviewer should try to add a review comment answering such questions, unless they indicate a need to better extend the commit message, code comments, docstrings or accompanying documentation files. In almost all cases, a negative review should be accompanied by clear instructions for the submitter on how they might fix the patch. There may be more specific items to be aware of inside the project's documentation for contributors. Contributors may notice a review that has several +1s from other reviewers, passes the functional tests, etc., but the code still has not been merged. As only core contributors can approve code for merging, you can help things along by getting a core reviewer's attention in IRC (never on the mailing lists) and letting them know there is a changeset with lots of positive reviews that needs final approval. To get early feedback on a change which is not fully finished yet, you can submit a change to Gerrit and mark it as Work in Progress (WIP). Note The OpenDev Gerrit system does not support drafts; use Work in Progress instead. Draft changes have been disabled because people assume they are private when they are not. They also create confusion if child changes are not drafts. Additionally, it is difficult to run CI on them. It is better to assume changes are public and mark the not-yet-ready state. To do so, after submitting a change to Gerrit in the usual way (git review), you should go to Gerrit and do a Code Review of your own change while setting the Workflow vote to -1, which marks the change as WIP. This allows others to review the change, while at the same time blocking it from being merged, as you already plan to continue working on it. Note After uploading a new patchset, this -1 (WIP) vote disappears. So if you still plan to do additional changes, do not forget to set Workflow to -1 on the new patchset. Once a change has been approved and has passed the gate jobs, Gerrit automatically merges the latest patchset. Each patchset gets merged to the head of the branch before testing it. If Gerrit cannot merge a patchset, it will give a -1 review and add a comment notifying of the merge failure. Each time a change merges, the merge-check pipeline verifies that all open changes on the same project are still mergeable. If any change is not mergeable, Zuul will give a -1 review and add a comment notifying of the merge failure. After a change is merged, project-specific post jobs are run. Most often the post jobs publish documentation, run coverage, or send strings to the translation server. Project gating refers to the process of running regression tests before a developer's patchset is merged.
The intent of running regression tests is to validate that new changes submitted against the source code repository will not introduce new bugs. Gating prevents regressions by ensuring that a series of tests pass successfully before allowing a patchset to be merged into the mainline of development. The system used for gating is Zuul, which listens to the Gerrit event stream and is configured with YAML files to define a series of tests to be run in response to an event. The jobs in the gate queue are executed once a core reviewer approves a change (using a +1 Workflow vote) and a Verified +1 vote exists. When approving, at least one +2 Code-Review vote needs to exist (it can be given by the core reviewer when approving). The convention is that two +2 Code-Reviews are needed for approving. Once all of the jobs report success on an approved patchset in the configured gate pipeline, Gerrit will merge the code into trunk. Besides running the gate tests, the gate pipeline determines the order of changes to merge across multiple projects. The changes are tested and merged in this order, so that for each change the state of all other repositories can be identified. Additional information about project gating and Zuul can be found in the Zuul documentation, located at: https://zuul-ci.org/docs/zuul/user/gating.html" } ]
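To make "configured with YAML files" a little more concrete, a minimal Zuul project stanza might look like the sketch below; the job names are illustrative placeholders, and real definitions live in each project's Zuul configuration:

```
- project:
    check:
      jobs:
        - openstack-tox-pep8
        - openstack-tox-py3
    gate:
      jobs:
        - openstack-tox-pep8
        - openstack-tox-py3
```

Jobs listed under check run on every new patchset, while jobs under gate run after approval, matching the pipeline behavior described above.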
{ "category": "Provisioning", "file_name": ".md", "project_name": "OpenStack", "subcategory": "Automation & Configuration" }
[ { "data": "What is OpenStack? OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. This is the latest release. Use the top menu to select a prior release if needed. The documentation covers:

- New features, upgrade and deprecation notes, known issues, and bug fixes
- Getting started with the most commonly used OpenStack services
- Choose how to deploy OpenStack and get started with the most commonly used OpenStack services
- Manage and troubleshoot an OpenStack cloud
- Install and configure OpenStack for high availability
- Plan and design an OpenStack cloud
- Operate an OpenStack cloud
- Guidelines and scenarios for creating more secure OpenStack clouds
- Obtain, create, and modify OpenStack-compatible virtual machine images
- Installation and configuration options for OpenStack
- OpenStack API Documentation
- Create and manage resources using the OpenStack dashboard, command-line client, and Python SDK
- Resources for application development on OpenStack clouds
- Documentation for OpenStack services and libraries
- Documentation for the OpenStack Python bindings and clients
- The Extended Maintenance SIG manages the existing stable branches
- Self-healing use cases and implementation details
- The journey of running OpenStack at large scale
- The contribution process explained
- Documentation workflow and conventions
- OpenStack Technical Committee reference documents and official resolutions
- Specifications for future project features
- Guide to the OpenStack project and community
- Community-managed development and communication systems
- Internationalization workflow and conventions
- How to join the Open Infrastructure Foundation
- Influence the future of OpenStack
- Resources for the OpenStack Upstream Training program
- Documentation treated like code, powered by the community - interested?" } ]
{ "category": "Provisioning", "file_name": "contributing.md", "project_name": "Meshery", "subcategory": "Automation & Configuration" }
[ { "data": "Please do! Thanks for your help! Meshery is community-built and welcomes collaboration. Contributors are expected to adhere to the CNCF's Code of Conduct. Follow these steps and you'll be right at home. See the Newcomers Guide for how, where, and why to contribute. Sign up for a MeshMate to find the perfect Mentor to help you explore the Layer5 projects and find your place in the community. To contribute to Meshery, from creating a fork to creating a pull request, please follow the basic fork-and-pull request workflow described here. All commits must be signed off, for example:

```
Signed-off-by: Jane Smith <jane.smith@example.com>
```

```
$ git commit -s -m "my commit message w/signoff"
```

```
[alias]
  amend = commit -s --amend
  cm = commit -s -m
  commit = commit -s
```

Meshery is written in Go (Golang) and leverages Go Modules. The UI is built on React and Next.js. To make building and packaging easier, a Makefile is included in the main repository folder. Relevant coding style guidelines are the Go Code Review Comments and the Formatting and style section of Peter Bourgon's Go: Best Practices for Production Environments. Please note: All make commands should be run in a terminal from within Meshery's main folder." } ]
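With the aliases above added to your ~/.gitconfig, the sign-off flag is applied automatically; a hypothetical usage (the commit message is made up):

```
$ git cm "docs: fix typo in contributing guide"
```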
{ "category": "Provisioning", "file_name": ".md", "project_name": "OpenYurt", "subcategory": "Automation & Configuration" }
[ { "data": "Welcome to the world of OpenYurt! OpenYurt is the first edge computing platform that is non-intrusive to cloud-native systems in the industry. It unifies the management of scattered massive edge heterogeneous resources (such as CDN sites, IoT all-in-one machines and other edge computing power) from the control side (located in the cloud or central server room, etc.). It helps users to easily complete large-scale application delivery, operation and maintenance, and control on massive edge resources. OpenYurt will continue to work on exploring cloud-native edge computing platform standards for cloud-edge-device collaboration, and will also combine community mainstream computing, networking, storage, application orchestration, security, AI, IoT and other projects or solutions to build a complete upstream and downstream ecology. Powerful edge autonomy capability In Kubernetes, normally if a node is disconnected from the apiserver, the running Pods cannot be recovered when the node hits failures. Moreover, pods on edge nodes will be evicted by native controllers of the Kube-Controller-Manager component when node heartbeat is not reported for more than 5m. This brings a significant challenge for the cloud-edge orchestration since the cloud-edge networking can be unreliable. As described in below Figure, OpenYurt introduces a per node proxy (YurtHub) and a local storage to cache cloud apiserver states, hence the cached states can be used by Kubelet, KubeProxy or user Pods if the node is disconnected. And with the help of the Pool-Coordinator component, Leader Yurthub in NodePool can be delegated to proxy node heartbeat for other edge nodes in this pool which are disconnected with cloud, so pods on edge nodes will not be evicted even if the network is disconnected. Cross NodePool network communication capability In the edge computing Kubernetes cluster, the nodes themselves may belong to different regions, so based on native CNI network solution, pods in different nodepools can not communicate with each other through Pod IP, Service IP, or Node IP if each nodepool is within an isolated LAN. Raven is an elegant network solution for providing cross-nodepool network communication capability in an OpenYurt cluster. A node daemon is installed for every node, and only one daemon in each nodepool is picked as gateway that sets up the VPN tunnel between nodepools, the other daemons in the nodepool configure the cross-nodepool network routing rules to ensure the traffic will go through the gateway node. Moreover, raven only hijacks cross nodepool traffic, and the traffic within nodepool still uses the native CNI network solution. Therefore, raven can work together with native CNI network solutions(such as flannel, calico, etc.) seamlessly. As described in below Figure, The cross-nodepool traffic is forwarded to gateway node and goes through VPN tunnel. Multi-NodePool management For better cloud-edge orchestration, OpenYurt pioneers the idea of managing a Pool, which encapsulates the management of node resources, applications, and workload traffic. 
The edge computing resources are usually classified and divided into nodepools according to their geographical locations. In order to manage applications and traffic in multiple nodepools conveniently, several features have been developed for nodepools, as follows: Advanced workload upgrade model: In a cloud-edge architecture, it is easy to get stuck during the DaemonSet upgrade process if the number of NotReady nodes exceeds the maxUnavailable of RollingUpdate, because the cloud-edge network connection is unreliable. In another scenario, because edge nodes may belong to different users (such as new energy vehicles), end users expect that pods on nodes are not automatically upgraded, but that users themselves decide whether to start the pod upgrade on their nodes. To address the above challenges, OpenYurt enhances the DaemonSet upgrade model and adds OTA (On-The-Air) and Auto upgrade models. Programmable resource access control: As described in the figure below, a programmable data filtering framework is built into the YurtHub component; the response data from the cloud passes through a chain of filters that transform it on demand and transparently to clients, so as to meet specific requirements in the cloud-edge collaboration scenario. Four filters are supported in the filter chain at present, and new filters can easily be added to the framework. Cloud-edge network bandwidth reduction: A performance test has shown that in a large-scale OpenYurt cluster, cloud-edge traffic increases rapidly if pods are deleted and recreated, since the kube-proxy components on the edge nodes watch all endpoints/endpointslices changes. Note that typically the same endpoints information will be transferred to every edge node within a nodepool, which can be less cost efficient considering that the cloud-edge networking traffic uses the public network. Leveraging the Pool-Coordinator mentioned above, OpenYurt introduces a notion of pool-scoped metadata, which is unique within a nodepool, such as the endpoints/endpointslices data. As described in the figure below, the leader YurtHub reads the pool-scoped data from the cloud kube-apiserver and writes it to the pool-coordinator. As a result, all other YurtHubs retrieve the pool-scoped data from the pool-coordinator, eliminating the use of public network bandwidth for retrieving such data from the cloud kube-apiserver. Cloud-native edge device management: OpenYurt defines a set of APIs for managing edge devices through the cloud Kubernetes control plane. The APIs abstract a device's basic properties, main capabilities, and the data that should be transmitted between the cloud and the edge. OpenYurt provides integration with mainstream OSS IoT device management solutions, such as EdgeXFoundry, using these APIs. As described in the figure below, an instance of the yurt-device-controller component and an EdgeXFoundry service are deployed in each nodepool. The yurt-device-controller component gets changes to Device CRDs from the cloud kube-apiserver, converts the desired spec of a Device CRD into requests to the EdgeXFoundry service, and transmits those requests in real time. In the other direction, yurt-device-controller subscribes to device status from the EdgeXFoundry service and updates the status of the Device CRD when it changes. Here are some recommended next steps:" } ]
{ "category": "Provisioning", "file_name": "summary.md", "project_name": "OpenYurt", "subcategory": "Automation & Configuration" }
[ { "data": "OpenYurt Cluster installation is divided into two parts: installing the OpenYurt Control Plane components and joining nodes. Some common problems you may encounter have been listed in the FAQ. We recommend that users install the Control Plane components manually at present; other installation methods will be supported in later versions. End users can join nodes into an OpenYurt cluster directly with the yurtadm join command or install the OpenYurt node components manually on an already joined node." } ]
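As a rough sketch of the join step, the invocation generally follows the kubeadm-style pattern below; the API server address and bootstrap token are placeholders, and the exact flags are an assumption that should be verified against the yurtadm reference for your release:

```
$ yurtadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef --node-type edge
```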
{ "category": "Provisioning", "file_name": "setup-gerrit.html#individual-contributor-license-agreement.md", "project_name": "OpenStack", "subcategory": "Automation & Configuration" }
[ { "data": "Note This section assumes you have completed the Setup and Learn GIT guide. This is the review system the OpenStack community uses. Gerrit allows you to review:

- Code, docs, infrastructure changes, and CI configurations
- Specifications
- Translations
- Use cases for features

Visit OpenStack's Gerrit page and click the sign in link. You will be prompted to select a username. You can enter the same one you did for Launchpad, or something else. Note Choose and type your username carefully. Once it is set, you cannot change the username. Note From here on out when you sign into Gerrit, you'll be prompted to enter your Launchpad login info. This is because Gerrit uses it as an OpenID single sign-on. An agreement to clarify intellectual property rights granted with contributions from a person or entity. Preview the full agreement. In Gerrit's settings click the New Contributor Agreement link and sign the agreement. You need this to contribute code & documentation. You will not be able to push patches to Gerrit without this. If you are contributing on behalf of a company or organization, please make sure that you sign the ICLA AND also get added to the list of contributors on your company's Corporate Contributor License Agreement (CCLA). You will need to complete both of these steps before being able to contribute. In Gerrit's settings click the New Contributor Agreement link and sign the agreement. An employer with the appropriate signing rights of the company or organization needs to sign the Corporate Contributor License Agreement. If the CCLA only needs to be extended, follow this procedure. Note Employers can update the list of authorized employees by filling out and signing an Updated Schedule A Form. Someone of authority needs to sign the U.S. Government Contributor License Agreement. Contact the Open Infrastructure Foundation to initiate this process. In order to push things to Gerrit we need to have a way to identify ourselves. We will do this using SSH keys, which allow the machine we're pushing a change from to perform a challenge-response authentication with the Gerrit server. SSH keys are always generated in pairs:

- Private key - Only known to you and it should be safely guarded.
- Public key - Can be shared freely with any SSH server you wish to connect to.

In summary, you will be generating an SSH key pair, and providing the Gerrit server with your public key. With your system holding the private key, it will have no problem replying to Gerrit during the challenge-response authentication. Some people choose to use one SSH key pair to access many systems while others prefer to use separate key pairs. Both options are covered in the following sections. Open your terminal program and type:

```
ls -la ~/.ssh
```

Typically public key filenames will look like: id_dsa.pub, id_ecdsa.pub, id_ed25519.pub, id_rsa.pub. If you don't see a .pub extension file or want to generate a specific set for OpenStack Gerrit, you need to generate keys. Note This guide recommends using ed25519 keys because it has been found that this type works well across all operating systems. You can generate a new SSH key pair using the provided email as a label by going into your terminal program and typing:

```
ssh-keygen -t ed25519 -C "your_email@example.com"
```

When you're prompted to "Enter a file in which to save the key" press Enter.
This accepts the default location:

```
Enter a file in which to save the key (/home/you/.ssh/id_ed25519): [Press enter]
```

At the prompt, type a secure passphrase; you may enter one or press Enter to have no passphrase:

```
Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]
```

You can generate a new SSH key using the provided email as a label by going into your terminal program and typing:

```
ssh-keygen -t ed25519 -C "your_email@example.com"
```

When you're prompted to "Enter a file in which to save the key" you must specify the name of the new key pair and then press Enter:

```
Enter a file in which to save the key (/Users/you/.ssh/id_ed25519): /Users/you/.ssh/id_openstack_ed25519
```

At the prompt, type a secure passphrase; you may enter one or press Enter to have no passphrase:

```
Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]
```

Finally you need to tell ssh what host(s) to associate SSH keys with. To do this open ~/.ssh/config in an editor, create the file if it doesn't exist, and add something like:

```
Host review.opendev.org review
  Hostname review.opendev.org
  Port 29418
  User <your_gerrit_username>
  IdentityFile ~/.ssh/id_openstack_ed25519
```

From your terminal type:

```
cat ~/.ssh/id_ed25519.pub
```

Or if you created a separate key pair, assuming the example name above:

```
cat ~/.ssh/id_openstack_ed25519.pub
```

Highlight and copy the output. Go to Gerrit's SSH Keys section in User Settings. Paste the public key into the New SSH Key text box. Click the ADD NEW SSH KEY button. Git review is a tool maintained by the OpenStack community. It adds an additional sub-command to git like so:

```
git review
```

When you have changes in an OpenStack project repository, you can use this sub-command to have the changes posted to Gerrit so that they can be reviewed. In a terminal type:

```
pip install git-review
```

If you don't have pip installed already, follow the installation documentation for pip. Note Mac OS X El Capitan and Mac OS Sierra users might see an error message like "Operation not permitted" when installing with the command. In this case, there are two options to successfully install git-review. Option 1: install using pip with more options:

```
pip install --install-option '--install-data=/usr/local' git-review
```

Option 2: Use the package manager Homebrew, and type in a terminal:

```
brew install git-review
```

For distributions like Debian, Ubuntu, or Mint open a terminal and type:

```
sudo apt install git-review
```

For distributions like RedHat, Fedora 21 or earlier, or CentOS open a terminal and type:

```
sudo yum install git-review
```

For Fedora 22 or later open a terminal and type:

```
sudo dnf install git-review
```

For SUSE distributions open a terminal and type:

```
sudo zypper in python-git-review
```

Git review assumes the user you're running it as is the same as your Gerrit username. If it's not, you can tell it by setting this git config setting:

```
git config --global gitreview.username <username>
```

If you don't know what your Gerrit username is, you can check the Gerrit settings. Before doing git commit on your patch it is important to initialize git review. Use the following command to do the initial git review configuration in your repository:

```
git review -s
```

The command sets up the necessary remote hosts and commit hooks to enable pushing changes to Gerrit. Note Git review only needs to be initialized once in a repository.
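Once your public key is uploaded, you can sanity-check the connection before running git review -s; a successful handshake prints a short welcome banner from Gerrit and then closes (the username is a placeholder):

```
ssh -p 29418 <your_gerrit_username>@review.opendev.org
```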
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Pulumi", "subcategory": "Automation & Configuration" }
[ { "data": "``` $ brew install pulumi/tap/pulumi ``` amd64 arm64 macOS Sierra (10.12) or later is required. The latest version of Pulumi is 3.119.0. For older versions, see Available Versions. For a list of features, bug fixes, and more see the CHANGELOG. Pulumi supports many clouds using the same languages, CLI, and deployment workflow. For a streamlined Pulumi walkthrough, including language runtime installation and cloud configuration, see the Get Started guides. ``` $ curl -fsSL https://get.pulumi.com | sh ``` amd64 The latest version of Pulumi is 3.119.0. For older versions, see Available Versions. For a list of features, bug fixes, and more see the CHANGELOG. Pulumi supports many clouds using the same languages, CLI, and deployment workflow. For a streamlined Pulumi walkthrough, including language runtime installation and cloud configuration, see the Get Started guides. amd64 amd64 Windows 8 and later are supported. The latest version of Pulumi is 3.119.0. For older versions, see Available Versions. For a list of features, bug fixes, and more see the CHANGELOG. Pulumi supports many clouds using the same languages, CLI, and deployment workflow. For a streamlined Pulumi walkthrough, including language runtime installation and cloud configuration, see the Get Started guides. In addition, there are many ways to install Pulumi: You can install Pulumi through the Homebrew package manager and using our official Pulumi Homebrew Tap ``` $ brew install pulumi/tap/pulumi ``` This will install the pulumi CLI to the usual place (often /usr/local/bin/pulumi) and add it to your path. Subsequent updates can be installed in the usual way: ``` $ brew upgrade pulumi ``` A Pulumi formula is available on the Community Homebrew. If you do not have the Pulumi tap installed, then you can still install Pulumi from homebrew using the command: ``` $ brew install pulumi ``` You can install Pulumi through the MacPorts package manager: ``` $ sudo port install pulumi ``` This will install the pulumi CLI to /opt/local/bin/pulumi and add it to your path. Subsequent updates can be installed through the upgrade outdated command: ``` $ sudo port upgrade outdated ``` Alternatively, you can run our installation script. ``` $ curl -fsSL https://get.pulumi.com | sh ``` This will install the pulumi CLI to ~/.pulumi/bin and add it to your path. When it cant automatically add pulumi to your path, you will be prompted to add it manually. See How to permanently set $PATH on Unix for guidance. The installer script can be rerun to subsequently install new" }, { "data": "If you do not wish to use the previous options, you can install Pulumi manually. To install, run our installation script: ``` $ curl -fsSL https://get.pulumi.com | sh ``` This will install the pulumi CLI to ~/.pulumi/bin and add it to your path. When it cant automatically add pulumi to your path, you will be prompted to add it manually. See How to permanently set $PATH on Unix for guidance. Alternatively, you can install Pulumi manually. We provide a prebuilt binary for Linux. You can install Pulumi using elevated permissions through the Chocolatey package manager: ``` choco install pulumi ``` This will install the pulumi CLI to the usual place (often $($env:ChocolateyInstall)\\lib\\pulumi) and generate the shims (usually $($env:ChocolateyInstall)\\bin) to add Pulumi your path. Subsequent updates can be installed in the usual way: ``` choco upgrade pulumi ``` Install Pulumi using the Windows Package Manager winget CLI. This is built-in on Windows 11 and later. 
``` winget install pulumi ``` To update Pulumi to a more recent version: ``` winget upgrade pulumi ``` Download the latest Pulumi Installer for Windows x64 and run it like any other installer. It will automatically add Pulumi to the path and make it available machine-wide. Alternatively, open a new command prompt window (WIN+R: cmd.exe) and run our installation script: ``` @\"%SystemRoot%\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command \"[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; iex ((New-Object System.Net.WebClient).DownloadString('https://get.pulumi.com/install.ps1'))\" && SET \"PATH=%PATH%;%USERPROFILE%\\.pulumi\\bin\" ``` This will install the pulumi.exe CLI to %USERPROFILE%\\.pulumi\\bin and add it to your path. Alternatively, you can install Pulumi manually using binaries built for Windows x64. Unzip the file and extract the contents to a folder such as C:\\pulumi. Add C:\\pulumi\\bin to your path via System Properties -> Advanced -> Environment Variables -> User Variables -> Path -> Edit. After installing Pulumi, verify everything is in working order by running the pulumi CLI: ``` $ pulumi version v3.119.0 ``` These are common installation-related errors or warnings you may encounter. If you get an error that pulumi could not be found, it means your path has not been configured correctly. Verify that your system's $PATH contains the directory containing the pulumi CLI installed earlier. If a new version of Pulumi is available, the CLI produces the following example warning when running any of the available commands. On macOS and Linux: ``` warning: A new version of Pulumi is available. To upgrade from version '2.17.26' to '3.119.0', run $ curl -sSL https://get.pulumi.com | sh or visit https://pulumi.com/docs/reference/install/ for manual instructions and release notes. ``` On Windows: ``` warning: A new version of Pulumi is available. To upgrade from version '2.17.26' to '3.119.0', run > \"%SystemRoot%\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command \"iex ((New-Object System.Net.WebClient).DownloadString('https://get.pulumi.com/install.ps1'))\" or visit https://pulumi.com/docs/reference/install/ for manual instructions and release notes. ``` If you're in an environment with no internet access, you may skip the Pulumi version update check by setting the environment variable PULUMI_SKIP_UPDATE_CHECK to 1 or true. If you are upgrading from Pulumi 2.0 to 3.0, please see our migration guide. Most installation methods choose the latest version by default. To install a specific version, use the following commands. You can find the list of versions on the Available Versions page.
On macOS or Linux, run our installation script with the --version flag: ``` $ curl -fsSL https://get.pulumi.com | sh -s -- --version <version> ``` You can specify a specific version with the Chocolatey package manager: ``` choco install pulumi --version <version> ``` Or open a new command prompt window (WIN+R: cmd.exe) and run our installation script (replace <version> with the version number): ``` @\"%SystemRoot%\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command \"[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $version = '<version>'; iex ((New-Object System.Net.WebClient).DownloadString('https://get.pulumi.com/install.ps1')).Replace('${Version}', $version)\" && SET \"PATH=%PATH%;%USERPROFILE%\\.pulumi\\bin\" ``` In addition to installing a specific version, the latest dev version can also be installed automatically. This version contains the latest changes that have been merged to the main development branch. On macOS or Linux: ``` $ curl -fsSL https://get.pulumi.com | sh -s -- --version dev ``` On Windows, open a new command prompt window (WIN+R: cmd.exe) and run our installation script: ``` @\"%SystemRoot%\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command \"[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; iex ((New-Object System.Net.WebClient).DownloadString('https://get.pulumi.com/install.ps1')) -version dev\" && SET \"PATH=%PATH%;%USERPROFILE%\\.pulumi\\bin\" ``` To uninstall Pulumi, use your installation method's command of choice. If you installed Pulumi manually, delete the pulumi directory that you created. Afterwards, remove the .pulumi folder from your home directory, which contains plugins and other cached metadata.
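For reproducible setups it is worth pinning and then verifying an exact CLI version. A small sketch combining the commands above (3.119.0 is simply the release current at the time of writing):

```
# pin an exact release using the script's documented --version flag
curl -fsSL https://get.pulumi.com | sh -s -- --version 3.119.0
# confirm the pinned binary is the one found on $PATH
pulumi version   # expected output: v3.119.0
```
" } ]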
{ "category": "Provisioning", "file_name": ".md", "project_name": "Salt Project", "subcategory": "Automation & Configuration" }
[ { "data": "Install Salt Post-installation Upgrade Salt Support Contribute Versions The Salt install guide New to Salt? Try this tutorial Supported Salt releases 3007.1 3006.8 See also Welcome to the Salt install guide! This guide provides instructions for installing Salt on Salt supported operating systems. It also explains how to configure Salt, start Salt services, and verify your installation. Note that the Salt Project has phased out classic package builds for supported operating systems for 3006 and later. Update your Salt infrastructure to the new onedir packages as soon as possible. See Upgrade to onedir for instructions. Using the standard installation method is recommended for most organizations, especially if you are just starting out with Salt. The standard installation will make using Salt easier and provides functionality that isnt available in masterless/agentless Salt configurations. | Unnamed: 0 | Process | For more information | |-:|:|:--| | 1 | Before you start the installation, check the system requirements to ensure your platform is supported in the latest version of Salt and open the required network ports. Ensure you also have the correct permissions to install packages on the targeted nodes. | Check system requirements Check your network ports Check your permissions Salt supported operating systems Salt version support lifecycle Support for Python versions | | 2 | Install the salt-master service on the node that will manage your other nodes, meaning it will send commands to other nodes. Then, install the salt-minion service on the nodes that will be managed by the Salt master. For Linux-based operating systems, the recommended installation method is to use the bootstrap script or you can manually install Salt using the instructions for each operating system. For Windows or macOS operating systems, you need to download and run the installer file for that system. | For Linux-based systems: Bootstrap installation Manual install directions by operating system For macOS or Windows: macOS Windows For all operating systems: Manual install directions by operating system | | 3 | Configure the Salt minions to add the DNS/hostname or IP address of the Salt master they will connect to. You can add additional configurations to the master and minions as" }, { "data": "| Configure the Salt master and minions Configuring the minion | | 4 | Start the service on the master, then the minions. | Start the master and minion services | | 5 | Accept the minion keys after the minion connects. | Accept the minion keys | | 6 | Verify that the installation was successful by sending a test ping. | Verify a Salt install | | 7 | Install third-party Python dependencies needed for specific modules. | Install dependencies | Install the salt-master service on the node that will manage your other nodes, meaning it will send commands to other nodes. Then, install the salt-minion service on the nodes that will be managed by the Salt master. For Linux-based operating systems, the recommended installation method is to use the bootstrap script or you can manually install Salt using the instructions for each operating system. For Windows or macOS operating systems, you need to download and run the installer file for that system. In general, you should only use alternative installation and configuration options if you are an intermediate or advanced Salt user. 
Although the standard Salt configuration model is the master/minion (master/client) model, minions do not necessarily have to have a master to be managed. Salt also gives additional options for managing minions:

| Type | Description | For more information |
|:--|:--|:--|
| Masterless | Running a masterless salt-minion lets you use Salt's configuration management for a single machine without calling out to a Salt master on another machine (see the sketch after this table). | Salt masterless quickstart |
| Salt cloud | Provisions and manages systems on cloud hosts or hypervisors. It uses the Saltify driver to install Salt on existing machines (virtual or bare metal). | Salt cloud; Getting started with Saltify |
| Proxy minions | Send and receive commands from minions that, for whatever reason, can't run the standard salt-minion service. | Proxy minions |
| Agentless | Use SSH to run Salt commands on a minion without installing an agent. | Salt SSH |
| Install Salt for development | If you plan to contribute to the Salt codebase, use this installation method. | Installing Salt for development |
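As referenced in the masterless row above, the quickest way to try Salt's configuration management on a single machine is to run execution modules and states locally with salt-call. A minimal sketch, assuming the salt-minion package is installed on the machine and states live in the default /srv/salt:

```
sudo salt-call --local test.version   # run an execution module with no master involved
sudo salt-call --local state.apply    # apply states from the local /srv/salt tree
```
" } ]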
{ "category": "Provisioning", "file_name": "contents.html.md", "project_name": "Salt Project", "subcategory": "Automation & Configuration" }
[ { "data": "Note Welcome to Salt Project! I am excited that you are interested in Salt and starting down the path to better infrastructure management. I developed (and am continuing to develop) Salt with the goal of making the best software available to manage computers of almost any kind. I hope you enjoy working with Salt and that the software can solve your real world needs! Thomas S Hatch Salt Project creator and Chief Developer of Salt Project Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities. This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure. The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems. On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration management system called Salt States. SaltStack has been made to be very easy to install and get started. The Salt install guide provides instructions for all supported platforms. Salt functions on a master/minion topology. A master server acts as a central control bus for the clients, which are called minions. The minions connect back to the master. Turning on the Salt Master is easy -- just turn it on! The default configuration is suitable for the vast majority of installations. The Salt Master can be controlled by the local Linux/Unix service manager: On Systemd based platforms (newer Debian, openSUSE, Fedora): ``` systemctl start salt-master ``` On Upstart based systems (Ubuntu, older Fedora/RHEL): ``` service salt-master start ``` On SysV Init systems (Gentoo, older Debian etc.): ``` /etc/init.d/salt-master start ``` Alternatively, the Master can be started directly on the command-line: ``` salt-master -d ``` The Salt Master can also be started in the foreground in debug mode, thus greatly increasing the command output: ``` salt-master -l debug ``` The Salt Master needs to bind to two TCP network ports on the system. These ports are 4505 and 4506. For more in depth information on firewalling these ports, the firewall tutorial is available here. When a minion starts, by default it searches for a system that resolves to the salt hostname on the network. If found, the minion initiates the handshake and key authentication process with the Salt master. This means that the easiest configuration approach is to set internal DNS to resolve the name salt back to the Salt Master IP. Otherwise, the minion configuration file will need to be edited so that the configuration option master points to the DNS name or the IP of the Salt Master: Note The default location of the configuration files is /etc/salt. Most platforms adhere to this convention, but platforms such as FreeBSD and Microsoft Windows place this file in different locations. 
/etc/salt/minion: ``` master: saltmaster.example.com ``` Note The Salt Minion can operate with or without a Salt master. This walk-through assumes that the minion will be connected to the master; for information on how to run a master-less minion, please see the master-less quick-start guide: Masterless Minion Quickstart Now that the master can be found, start the minion in the same way as the master; with the platform init system or via the command line directly: As a daemon: ``` salt-minion -d ``` In the foreground in debug mode: ``` salt-minion -l debug ``` When the minion is started, it will generate an id value, unless it has been generated on a previous run and cached (in /etc/salt/minion_id by default). This is the name by which the minion will attempt to authenticate to the master. The following steps are attempted, in order, to try to find a value that is not localhost: The Python function socket.getfqdn() is run /etc/hostname is checked (non-Windows only) /etc/hosts (%WINDIR%\\system32\\drivers\\etc\\hosts on Windows hosts) is checked for hostnames that map to anything within 127.0.0.0/8. If none of the above are able to produce an id which is not localhost, then a sorted list of IP addresses on the minion (excluding any within 127.0.0.0/8) is inspected. The first publicly-routable IP address is used, if there is one. Otherwise, the first privately-routable IP address is used. If all else fails, then localhost is used as a fallback. Note Overriding the id The minion id can be manually specified using the id parameter in the minion config file. If this configuration value is specified, it will override all other sources for the id. Now that the minion is started, it will generate cryptographic keys and attempt to connect to the master. The next step is to venture back to the master server and accept the new minion's public key. Salt authenticates minions using public-key encryption and authentication. For a minion to start accepting commands from the master, the minion keys need to be accepted by the master. The salt-key command is used to manage all of the keys on the master. To list the keys that are on the master: ``` salt-key -L ``` The keys that have been rejected, accepted, and pending acceptance are listed. The easiest way to accept the minion key is to accept all pending keys: ``` salt-key -A ``` Note Keys should be verified! Print the master key fingerprint by running salt-key -F master on the Salt master. Copy the master.pub fingerprint from the Local Keys section, and then set this value as the master_finger in the minion configuration file. Restart the Salt minion. On the master, run salt-key -f minion-id to print the fingerprint of the minion's public key that was received by the master. On the minion, run salt-call key.finger --local to print the fingerprint of the minion key. On the master: ``` Unaccepted Keys: foo.domain.com: 39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9 ``` On the minion: ``` local: 39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9 ``` If they match, approve the key with salt-key -a foo.domain.com. Now that the minion is connected to the master and authenticated, the master can start to command the minion. Salt commands allow for a vast set of functions to be executed and for specific minions and groups of minions to be targeted for execution. The salt command is comprised of command options, target specification, the function to execute, and arguments to the function.
A simple command to start with looks like this: ``` salt '*' test.version ``` The * is the target, which specifies all minions. test.version tells the minion to run the test.version function. In the case of test.version, test refers to an execution module. version refers to the version function contained in the aforementioned test module. Note Execution modules are the workhorses of Salt. They do the work on the system to perform various tasks, such as manipulating files and restarting services. The result of running this command will be the master instructing all of the minions to execute test.version in parallel and return the result. Using test.version is a good way of confirming that a minion is connected, and it reaffirms to the user the salt version(s) they have installed on the minions. Note Each minion registers itself with a unique minion ID. This ID defaults to the minion's hostname, but can be explicitly defined in the minion config as well by using the id parameter. Of course, there are hundreds of other modules that can be called just as test.version can. For example, the following would return disk usage on all targeted minions: ``` salt '*' disk.usage ``` Salt comes with a vast library of functions available for execution, and Salt functions are self-documenting. To see what functions are available on the minions, execute the sys.doc function: ``` salt '*' sys.doc ``` This will display a very large list of available functions and documentation on them. Note Module documentation is also available on the web. These functions cover everything from shelling out to package management to manipulating database servers. They comprise a powerful system management API which is the backbone to Salt configuration management and many other aspects of Salt. Note Salt comes with many plugin systems. The functions that are available via the salt command are called Execution Modules. The cmd module contains functions to shell out on minions, such as cmd.run and cmd.run_all: ``` salt '*' cmd.run 'ls -l /etc' ``` The pkg functions automatically map local system package managers to the same salt functions. This means that pkg.install will install packages via yum on Red Hat based systems, apt on Debian systems, etc.: ``` salt '*' pkg.install vim ``` Note Some custom Linux spins and derivatives of other distributions are not properly detected by Salt. If the above command returns an error message saying that pkg.install is not available, then you may need to override the pkg provider. This process is explained here. The network.interfaces function will list all interfaces on a minion, along with their IP addresses, netmasks, MAC addresses, etc: ``` salt '*' network.interfaces ``` The default output format used for most Salt commands is called the nested outputter, but there are several other outputters that can be used to change the way the output is displayed. For instance, the pprint outputter can be used to display the return data using Python's pprint module: ``` root@saltmaster:~# salt myminion grains.item pythonpath --out=pprint {'myminion': {'pythonpath': ['/usr/lib64/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/site-packages', '/usr/lib/python2.7/site-packages/gst-0.10', '/usr/lib/python2.7/site-packages/gtk-2.0']}} ``` The full list of Salt outputters, as well as example output, can be found here.
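Outputters change only the presentation of a return, not the data itself, so the same call can be rendered for scripts or for people. A small sketch using the --out flag shown above (myminion is an illustrative minion ID):

```
salt 'myminion' test.version --out=json   # machine-readable JSON
salt 'myminion' test.version --out=yaml   # the same return rendered as YAML
```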
The examples so far have described running commands from the Master using the salt command, but when troubleshooting it can be more beneficial to log in to the minion directly and use salt-call. Doing so allows you to see the minion log messages specific to the command you are running (which are not part of the return data you see when running the command from the Master using salt), making it unnecessary to tail the minion log. More information on salt-call and how to use it can be found here. Salt uses a system called Grains to build up static data about minions. This data includes information about the operating system that is running, CPU architecture and much more. The grains system is used throughout Salt to deliver platform data to many components and to users. Grains can also be statically set; this makes it easy to assign values to minions for grouping and managing. A common practice is to assign grains to minions to specify the role or roles a minion might have. These static grains can be set in the minion configuration file or via the grains.setval function. Salt allows for minions to be targeted based on a wide range of criteria. The default targeting system uses globular expressions to match minions, hence if there are minions named larry1, larry2, curly1, and curly2, a glob of larry* will match larry1 and larry2, and a glob of *1 will match larry1 and curly1. Many other targeting systems can be used other than globs; these systems include: Target using PCRE-compliant regular expressions Target based on grains data: Targeting with Grains Target based on pillar data: Targeting with Pillar Target based on IP address/subnet/range Create logic to target based on multiple targets: Targeting with Compound Target with nodegroups: Targeting with Nodegroup The concepts of targets are used on the command line with Salt, but also function in many other areas as well, including the state system and the systems used for ACLs and user permissions. Many of the functions available accept arguments which can be passed in on the command line: ``` salt '*' pkg.install vim ``` This example passes the argument vim to the pkg.install function. Since many functions can accept more complex input than just a string, the arguments are parsed through YAML, allowing for more complex data to be sent on the command line: ``` salt '*' test.echo 'foo: bar' ``` In this case Salt translates the string 'foo: bar' into the dictionary \"{'foo': 'bar'}\" Note Any line that contains a newline will not be parsed by YAML. Now that the basics are covered, the time has come to evaluate States. Salt States, or the State System, is the component of Salt made for configuration management. The state system is already available with a basic Salt setup; no additional configuration is required. States can be set up immediately. Note Before diving into the state system, a brief overview of how states are constructed will make many of the concepts clearer. Salt states are based on data modeling and build on a low level data structure that is used to execute each state function. Then more logical layers are built on top of each other. The high layers of the state system which this tutorial will cover consist of everything that needs to be known to use states; the two high layers covered here are the sls layer and the highest layer, highstate. Understanding the layers of data management in the State System will help with understanding states, but they never need to be used.
Just as understanding how a compiler functions assists when learning a programming language, understanding what is going on under the hood of a configuration management system will also prove to be a valuable asset. The state system is built on SLS (SaLt State) formulas. These formulas are built out in files on Salt's file server. To make a very basic SLS formula, open up a file under /srv/salt named vim.sls. The following state ensures that vim is installed on a system to which that state has been applied. /srv/salt/vim.sls: ```
vim:
  pkg.installed
``` Now install vim on the minions by calling the SLS directly: ``` salt '*' state.apply vim ``` This command will invoke the state system and run the vim SLS. Now, to beef up the vim SLS formula, a vimrc can be added: /srv/salt/vim.sls: ```
vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://vimrc
    - mode: 644
    - user: root
    - group: root
``` Now the desired vimrc needs to be copied into the Salt file server to /srv/salt/vimrc. In Salt, everything is a file, so no path redirection needs to be accounted for. The vimrc file is placed right next to the vim.sls file. The same command as above can be executed again, and it will now also manage the /etc/vimrc file. Note Salt does not need to be restarted/reloaded or have the master manipulated in any way when changing SLS formulas. They are instantly available. Obviously maintaining SLS formulas right in a single directory at the root of the file server will not scale out to reasonably sized deployments. This is why more depth is required. Start by making an nginx formula a better way: make an nginx subdirectory and add an init.sls file: /srv/salt/nginx/init.sls: ```
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
``` A few concepts are introduced in this SLS formula. First is the service statement which ensures that the nginx service is running. Of course, the nginx service can't be started unless the package is installed -- hence the require statement which sets up a dependency between the two. The require statement makes sure that the required component is executed before and that it results in success. Note The require option belongs to a family of options called requisites. Requisites are a powerful component of Salt States; for more information on how requisites work and what is available see: Requisites Also evaluation ordering is available in Salt as well: Ordering States This new sls formula has a special name -- init.sls. When an SLS formula is named init.sls it inherits the name of the directory path that contains it. This formula can be referenced via the following command: ``` salt '*' state.apply nginx ``` Note state.apply is just another remote execution function, just like test.version or disk.usage. It simply takes the name of an SLS file as an argument. Now that subdirectories can be used, the vim.sls formula can be cleaned up. To make things more flexible, move the vim.sls and vimrc into a new subdirectory called edit and change the vim.sls file to reflect the change: /srv/salt/edit/vim.sls: ```
vim:
  pkg.installed

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - mode: 644
    - user: root
    - group: root
``` Only the source path to the vimrc file has changed. Now the formula is referenced as edit.vim because it resides in the edit subdirectory. Now the edit subdirectory can contain formulas for emacs, nano, joe or any other editor that may need to be deployed. Two walk-throughs are specifically recommended at this point.
First, a deeper run through States, followed by an explanation of Pillar. Starting States Pillar Walkthrough An understanding of Pillar is extremely helpful in using States. Two more in-depth States tutorials exist, which delve much more deeply into States functionality. How Do I Use Salt States? covers much more to get off the ground with States. The States Tutorial also provides a fantastic introduction. These tutorials include much more in-depth information, including templating SLS formulas, etc. This concludes the initial Salt walk-through, but there are many more things still to learn! These documents will cover important core aspects of Salt: Pillar Job Management A few more tutorials are also available: Remote Execution Tutorial Standalone Minion This is still only scratching the surface; many components such as the reactor and event systems, extending Salt, modular components and more are not covered here. For an overview of all Salt features and documentation, look at the Table of Contents." } ]
{ "category": "Provisioning", "file_name": "py-modindex.html.md", "project_name": "Salt Project", "subcategory": "Automation & Configuration" }
[ { "data": "| 0 | 1 | 2 | |-:|:|:-| | nan | nan | nan | | nan | a | nan | | nan | salt.auth | nan | | nan | salt.auth.auto | nan | | nan | salt.auth.django | nan | | nan | salt.auth.file | nan | | nan | salt.auth.keystone | nan | | nan | salt.auth.ldap | nan | | nan | salt.auth.mysql | nan | | nan | salt.auth.pam | nan | | nan | salt.auth.pki | nan | | nan | salt.auth.rest | nan | | nan | salt.auth.sharedsecret | nan | | nan | salt.auth.yubico | nan | | nan | nan | nan | | nan | b | nan | | nan | salt.beacons | nan | | nan | salt.beacons.adb | nan | | nan | salt.beacons.aix_account | nan | | nan | salt.beacons.avahi_announce | nan | | nan | salt.beacons.bonjour_announce | nan | | nan | salt.beacons.btmp | nan | | nan | salt.beacons.cert_info | nan | | nan | salt.beacons.diskusage | nan | | nan | salt.beacons.glxinfo | nan | | nan | salt.beacons.haproxy | nan | | nan | salt.beacons.inotify | nan | | nan | salt.beacons.journald | nan | | nan | salt.beacons.junosrrekeys | nan | | nan | salt.beacons.load | nan | | nan | salt.beacons.log_beacon | nan | | nan | salt.beacons.memusage | nan | | nan | salt.beacons.napalm_beacon | nan | | nan | salt.beacons.network_info | nan | | nan | salt.beacons.network_settings | nan | | nan | salt.beacons.pkg | nan | | nan | salt.beacons.proxy_example | nan | | nan | salt.beacons.ps | nan | | nan | salt.beacons.salt_monitor | nan | | nan | salt.beacons.salt_proxy | nan | | nan | salt.beacons.sensehat | nan | | nan | salt.beacons.service | nan | | nan | salt.beacons.sh | nan | | nan | salt.beacons.smartos_imgadm | nan | | nan | salt.beacons.smartos_vmadm | nan | | nan | salt.beacons.status | nan | | nan | salt.beacons.swapusage | nan | | nan | salt.beacons.telegrambotmsg | nan | | nan | salt.beacons.twiliotxtmsg | nan | | nan | salt.beacons.watchdog | nan | | nan | salt.beacons.wtmp | nan | | nan | nan | nan | | nan | c | nan | | nan | salt.cache | nan | | nan | salt.cache.consul | nan | | nan | salt.cache.etcd_cache | nan | | nan | salt.cache.localfs | nan | | nan | salt.cache.mysql_cache | nan | | nan | salt.cache.redis_cache | nan | | nan | salt.cloud | nan | | nan | salt.cloud.clouds.aliyun | nan | | nan | salt.cloud.clouds.clc | nan | | nan | salt.cloud.clouds.cloudstack | nan | | nan | salt.cloud.clouds.digitalocean | nan | | nan | salt.cloud.clouds.dimensiondata | nan | | nan | salt.cloud.clouds.ec2 | nan | | nan | salt.cloud.clouds.gce | nan | | nan | salt.cloud.clouds.gogrid | nan | | nan | salt.cloud.clouds.hetzner | nan | | nan | salt.cloud.clouds.joyent | nan | | nan | salt.cloud.clouds.libvirt | nan | | nan | salt.cloud.clouds.linode | nan | | nan | salt.cloud.clouds.lxc | nan | | nan | salt.cloud.clouds.oneandone | nan | | nan | salt.cloud.clouds.opennebula | nan | | nan | salt.cloud.clouds.openstack | nan | | nan | salt.cloud.clouds.packet | nan | | nan | salt.cloud.clouds.parallels | nan | | nan | salt.cloud.clouds.profitbricks | nan | | nan |" }, { "data": "| nan | | nan | salt.cloud.clouds.pyrax | nan | | nan | salt.cloud.clouds.qingcloud | nan | | nan | salt.cloud.clouds.saltify | nan | | nan | salt.cloud.clouds.scaleway | nan | | nan | salt.cloud.clouds.softlayer | nan | | nan | salt.cloud.clouds.softlayer_hw | nan | | nan | salt.cloud.clouds.tencentcloud | nan | | nan | salt.cloud.clouds.vagrant | nan | | nan | salt.cloud.clouds.virtualbox | nan | | nan | salt.cloud.clouds.vmware | nan | | nan | salt.cloud.clouds.vultrpy | nan | | nan | salt.cloud.clouds.xen | nan | | nan | nan | nan | | nan | e | nan | | nan | salt.engines | nan | | nan | 
salt.engines.docker_events | nan | | nan | salt.engines.fluent | nan | | nan | salt.engines.http_logstash | nan | | nan | salt.engines.ircbot | nan | | nan | salt.engines.junos_syslog | nan | | nan | salt.engines.libvirt_events | nan | | nan | salt.engines.logentries | nan | | nan | salt.engines.logstash_engine | nan | | nan | salt.engines.napalm_syslog | nan | | nan | salt.engines.reactor | nan | | nan | salt.engines.redis_sentinel | nan | | nan | salt.engines.script | nan | | nan | salt.engines.slack | nan | | nan | salt.engines.slackboltengine | nan | | nan | salt.engines.sqs_events | nan | | nan | salt.engines.stalekey | nan | | nan | salt.engines.test | nan | | nan | salt.engines.thorium | nan | | nan | salt.engines.webhook | nan | | nan | salt.exceptions | nan | | nan | salt.executors | nan | | nan | salt.executors.direct_call | nan | | nan | salt.executors.docker | nan | | nan | salt.executors.splay | nan | | nan | salt.executors.sudo | nan | | nan | salt.executors.transactional_update | nan | | nan | nan | nan | | nan | f | nan | | nan | salt.fileserver | nan | | nan | salt.fileserver.gitfs | nan | | nan | salt.fileserver.hgfs | nan | | nan | salt.fileserver.minionfs | nan | | nan | salt.fileserver.roots | nan | | nan | salt.fileserver.s3fs | nan | | nan | salt.fileserver.svnfs | nan | | nan | nan | nan | | nan | g | nan | | nan | salt.grains | nan | | nan | salt.grains.chronos | nan | | nan | salt.grains.cimc | nan | | nan | salt.grains.core | nan | | nan | salt.grains.disks | nan | | nan | salt.grains.esxi | nan | | nan | salt.grains.extra | nan | | nan | salt.grains.fibre_channel | nan | | nan | salt.grains.fx2 | nan | | nan | salt.grains.iscsi | nan | | nan | salt.grains.junos | nan | | nan | salt.grains.lvm | nan | | nan | salt.grains.marathon | nan | | nan | salt.grains.mdadm | nan | | nan | salt.grains.mdata | nan | | nan | salt.grains.metadata | nan | | nan | salt.grains.metadata_gce | nan | | nan | salt.grains.minion_process | nan | | nan | salt.grains.napalm | nan | | nan | salt.grains.nvme | nan | | nan | salt.grains.nxos | nan | | nan | salt.grains.opts | nan | | nan | salt.grains.package | nan | | nan | salt.grains.panos | nan | | nan | salt.grains.pending_reboot | nan | | nan | salt.grains.philips_hue | nan | | nan | salt.grains.rest_sample | nan | | nan | salt.grains.smartos | nan | | nan |" }, { "data": "| nan | | nan | salt.grains.zfs | nan | | nan | nan | nan | | nan | l | nan | | nan | salt.log_handlers | nan | | nan | salt.loghandlers.fluentmod | nan | | nan | salt.loghandlers.log4mongomod | nan | | nan | salt.loghandlers.logstashmod | nan | | nan | salt.loghandlers.sentrymod | nan | | nan | nan | nan | | nan | m | nan | | nan | salt.modules | nan | | nan | salt.modules.acme | nan | | nan | salt.modules.aix_group | nan | | nan | salt.modules.aix_shadow | nan | | nan | salt.modules.aixpkg | nan | | nan | salt.modules.aliases | nan | | nan | salt.modules.alternatives | nan | | nan | salt.modules.ansiblegate | nan | | nan | salt.modules.apache | nan | | nan | salt.modules.apcups | nan | | nan | salt.modules.apf | nan | | nan | salt.modules.apkpkg | nan | | nan | salt.modules.aptly | nan | | nan | salt.modules.aptpkg | nan | | nan | salt.modules.archive | nan | | nan | salt.modules.arista_pyeapi | nan | | nan | salt.modules.artifactory | nan | | nan | salt.modules.at | nan | | nan | salt.modules.at_solaris | nan | | nan | salt.modules.augeas_cfg | nan | | nan | salt.modules.aws_sqs | nan | | nan | salt.modules.bamboohr | nan | | nan | salt.modules.baredoc | nan | | 
nan | salt.modules.bcache | nan | | nan | salt.modules.beacons | nan | | nan | salt.modules.bigip | nan | | nan | salt.modules.bluez_bluetooth | nan | | nan | salt.modules.boto3_elasticache | nan | | nan | salt.modules.boto3_elasticsearch | nan | | nan | salt.modules.boto3_route53 | nan | | nan | salt.modules.boto3_sns | nan | | nan | salt.modules.boto_apigateway | nan | | nan | salt.modules.boto_asg | nan | | nan | salt.modules.boto_cfn | nan | | nan | salt.modules.boto_cloudfront | nan | | nan | salt.modules.boto_cloudtrail | nan | | nan | salt.modules.boto_cloudwatch | nan | | nan | salt.modules.botocloudwatchevent | nan | | nan | salt.modules.boto_cognitoidentity | nan | | nan | salt.modules.boto_datapipeline | nan | | nan | salt.modules.boto_dynamodb | nan | | nan | salt.modules.boto_ec2 | nan | | nan | salt.modules.boto_efs | nan | | nan | salt.modules.boto_elasticache | nan | | nan | salt.modules.botoelasticsearchdomain | nan | | nan | salt.modules.boto_elb | nan | | nan | salt.modules.boto_elbv2 | nan | | nan | salt.modules.boto_iam | nan | | nan | salt.modules.boto_iot | nan | | nan | salt.modules.boto_kinesis | nan | | nan | salt.modules.boto_kms | nan | | nan | salt.modules.boto_lambda | nan | | nan | salt.modules.boto_rds | nan | | nan | salt.modules.boto_route53 | nan | | nan | salt.modules.boto_s3 | nan | | nan | salt.modules.botos3bucket | nan | | nan | salt.modules.boto_secgroup | nan | | nan | salt.modules.boto_sns | nan | | nan | salt.modules.boto_sqs | nan | | nan | salt.modules.boto_ssm | nan | | nan | salt.modules.boto_vpc | nan | | nan | salt.modules.bower | nan | | nan | salt.modules.bridge | nan | | nan | salt.modules.bsd_shadow | nan | | nan | salt.modules.btrfs | nan | | nan | salt.modules.cabal | nan | | nan | salt.modules.capirca_acl | nan | | nan | salt.modules.cassandra_cql | nan | | nan | salt.modules.celery | nan | | nan | salt.modules.ceph | nan | | nan |" }, { "data": "| nan | | nan | salt.modules.chef | nan | | nan | salt.modules.chocolatey | nan | | nan | salt.modules.chronos | nan | | nan | salt.modules.chroot | nan | | nan | salt.modules.cimc | nan | | nan | salt.modules.ciscoconfparse_mod | nan | | nan | salt.modules.cisconso | nan | | nan | salt.modules.cloud | nan | | nan | salt.modules.cmdmod | nan | | nan | salt.modules.composer | nan | | nan | salt.modules.config | nan | | nan | salt.modules.consul | nan | | nan | salt.modules.container_resource | nan | | nan | salt.modules.cp | nan | | nan | salt.modules.cpan | nan | | nan | salt.modules.cron | nan | | nan | salt.modules.cryptdev | nan | | nan | salt.modules.csf | nan | | nan | salt.modules.cyg | nan | | nan | salt.modules.daemontools | nan | | nan | salt.modules.data | nan | | nan | salt.modules.datadog_api | nan | | nan | salt.modules.ddns | nan | | nan | salt.modules.deb_apache | nan | | nan | salt.modules.deb_postgres | nan | | nan | salt.modules.debconfmod | nan | | nan | salt.modules.debian_ip | nan | | nan | salt.modules.debian_service | nan | | nan | salt.modules.debuild_pkgbuild | nan | | nan | salt.modules.defaults | nan | | nan | salt.modules.devinfo | nan | | nan | salt.modules.devmap | nan | | nan | salt.modules.dig | nan | | nan | salt.modules.disk | nan | | nan | salt.modules.djangomod | nan | | nan | salt.modules.dnsmasq | nan | | nan | salt.modules.dnsutil | nan | | nan | salt.modules.dockercompose | nan | | nan | salt.modules.dockermod | nan | | nan | salt.modules.dpkg_lowpkg | nan | | nan | salt.modules.drac | nan | | nan | salt.modules.dracr | nan | | nan | salt.modules.drbd 
| nan | | nan | salt.modules.dummyproxy_pkg | nan | | nan | salt.modules.dummyproxy_service | nan | | nan | salt.modules.ebuildpkg | nan | | nan | salt.modules.eix | nan | | nan | salt.modules.elasticsearch | nan | | nan | salt.modules.environ | nan | | nan | salt.modules.eselect | nan | | nan | salt.modules.esxcluster | nan | | nan | salt.modules.esxdatacenter | nan | | nan | salt.modules.esxi | nan | | nan | salt.modules.esxvm | nan | | nan | salt.modules.etcd_mod | nan | | nan | salt.modules.ethtool | nan | | nan | salt.modules.event | nan | | nan | salt.modules.extfs | nan | | nan | salt.modules.file | nan | | nan | salt.modules.firewalld | nan | | nan | salt.modules.freebsd_sysctl | nan | | nan | salt.modules.freebsd_update | nan | | nan | salt.modules.freebsdjail | nan | | nan | salt.modules.freebsdkmod | nan | | nan | salt.modules.freebsdpkg | nan | | nan | salt.modules.freebsdports | nan | | nan | salt.modules.freebsdservice | nan | | nan | salt.modules.freezer | nan | | nan | salt.modules.gcp_addon | nan | | nan | salt.modules.gem | nan | | nan | salt.modules.genesis | nan | | nan | salt.modules.gentoo_service | nan | | nan | salt.modules.gentoolkitmod | nan | | nan | salt.modules.git | nan | | nan | salt.modules.github | nan | | nan | salt.modules.glanceng | nan | | nan | salt.modules.glassfish | nan | | nan | salt.modules.glusterfs | nan | | nan | salt.modules.gnomedesktop | nan | | nan |" }, { "data": "| nan | | nan | salt.modules.gpg | nan | | nan | salt.modules.grafana4 | nan | | nan | salt.modules.grains | nan | | nan | salt.modules.group | A virtual module for group management | | nan | salt.modules.groupadd | nan | | nan | salt.modules.grub_legacy | nan | | nan | salt.modules.guestfs | nan | | nan | salt.modules.hadoop | nan | | nan | salt.modules.haproxyconn | nan | | nan | salt.modules.hashutil | nan | | nan | salt.modules.heat | nan | | nan | salt.modules.helm | nan | | nan | salt.modules.hg | nan | | nan | salt.modules.highstate_doc | nan | | nan | salt.modules.hosts | nan | | nan | salt.modules.http | nan | | nan | salt.modules.icinga2 | nan | | nan | salt.modules.idem | nan | | nan | salt.modules.ifttt | nan | | nan | salt.modules.ilo | nan | | nan | salt.modules.incron | nan | | nan | salt.modules.influxdb08mod | nan | | nan | salt.modules.influxdbmod | nan | | nan | salt.modules.infoblox | nan | | nan | salt.modules.ini_manage | nan | | nan | salt.modules.inspectlib | nan | | nan | salt.modules.inspectlib.collector | nan | | nan | salt.modules.inspectlib.dbhandle | nan | | nan | salt.modules.inspectlib.entities | nan | | nan | salt.modules.inspectlib.exceptions | nan | | nan | salt.modules.inspectlib.fsdb | nan | | nan | salt.modules.inspectlib.kiwiproc | nan | | nan | salt.modules.inspectlib.query | nan | | nan | salt.modules.inspector | nan | | nan | salt.modules.introspect | nan | | nan | salt.modules.iosconfig | nan | | nan | salt.modules.ipmi | nan | | nan | salt.modules.ipset | nan | | nan | salt.modules.iptables | nan | | nan | salt.modules.iwtools | nan | | nan | salt.modules.jboss7 | nan | | nan | salt.modules.jboss7_cli | nan | | nan | salt.modules.jenkinsmod | nan | | nan | salt.modules.jinja | nan | | nan | salt.modules.jira_mod | nan | | nan | salt.modules.junos | nan | | nan | salt.modules.k8s | nan | | nan | salt.modules.kapacitor | nan | | nan | salt.modules.kerberos | nan | | nan | salt.modules.kernelpkg | A virtual module for managing kernel packages | | nan | salt.modules.kernelpkglinuxapt | nan | | nan | salt.modules.kernelpkglinuxyum | nan | | 
nan | salt.modules.key | nan | | nan | salt.modules.keyboard | nan | | nan | salt.modules.keystone | nan | | nan | salt.modules.keystoneng | nan | | nan | salt.modules.keystore | nan | | nan | salt.modules.kmod | nan | | nan | salt.modules.kubeadm | nan | | nan | salt.modules.kubernetesmod | nan | | nan | salt.modules.launchctl_service | nan | | nan | salt.modules.layman | nan | | nan | salt.modules.ldap3 | nan | | nan | salt.modules.ldapmod | nan | | nan | salt.modules.libcloud_compute | nan | | nan | salt.modules.libcloud_dns | nan | | nan | salt.modules.libcloud_loadbalancer | nan | | nan | salt.modules.libcloud_storage | nan | | nan | salt.modules.linux_acl | nan | | nan | salt.modules.linux_ip | nan | | nan | salt.modules.linux_lvm | nan | | nan | salt.modules.linux_service | nan | | nan | salt.modules.linux_shadow | nan | | nan | salt.modules.linux_sysctl | nan | | nan | salt.modules.localemod | nan | | nan | salt.modules.locate | nan | | nan | salt.modules.logadm | nan | | nan | salt.modules.logmod | nan | | nan | salt.modules.logrotate | nan | | nan |" }, { "data": "| nan | | nan | salt.modules.lxc | nan | | nan | salt.modules.lxd | nan | | nan | salt.modules.mac_assistive | nan | | nan | salt.modules.macbrewpkg | nan | | nan | salt.modules.mac_desktop | nan | | nan | salt.modules.mac_group | nan | | nan | salt.modules.mac_keychain | nan | | nan | salt.modules.mac_pkgutil | nan | | nan | salt.modules.mac_portspkg | nan | | nan | salt.modules.mac_power | nan | | nan | salt.modules.mac_service | nan | | nan | salt.modules.mac_shadow | nan | | nan | salt.modules.mac_softwareupdate | nan | | nan | salt.modules.mac_sysctl | nan | | nan | salt.modules.mac_system | nan | | nan | salt.modules.mac_timezone | nan | | nan | salt.modules.mac_user | nan | | nan | salt.modules.mac_xattr | nan | | nan | salt.modules.macdefaults | nan | | nan | salt.modules.macpackage | nan | | nan | salt.modules.makeconf | nan | | nan | salt.modules.mandrill | nan | | nan | salt.modules.marathon | nan | | nan | salt.modules.match | nan | | nan | salt.modules.mattermost | nan | | nan | salt.modules.mdadm_raid | nan | | nan | salt.modules.mdata | nan | | nan | salt.modules.memcached | nan | | nan | salt.modules.mine | nan | | nan | salt.modules.minion | nan | | nan | salt.modules.mod_random | nan | | nan | salt.modules.modjk | nan | | nan | salt.modules.mongodb | nan | | nan | salt.modules.monit | nan | | nan | salt.modules.moosefs | nan | | nan | salt.modules.mount | nan | | nan | salt.modules.mssql | nan | | nan | salt.modules.msteams | nan | | nan | salt.modules.munin | nan | | nan | salt.modules.mysql | nan | | nan | salt.modules.nacl | nan | | nan | salt.modules.nagios | nan | | nan | salt.modules.nagios_rpc | nan | | nan | salt.modules.namecheap_domains | nan | | nan | salt.modules.namecheapdomainsdns | nan | | nan | salt.modules.namecheapdomainsns | nan | | nan | salt.modules.namecheap_ssl | nan | | nan | salt.modules.namecheap_users | nan | | nan | salt.modules.napalm_bgp | nan | | nan | salt.modules.napalm_formula | nan | | nan | salt.modules.napalm_mod | nan | | nan | salt.modules.napalm_netacl | nan | | nan | salt.modules.napalm_network | nan | | nan | salt.modules.napalm_ntp | nan | | nan | salt.modules.napalm_probes | nan | | nan | salt.modules.napalm_route | nan | | nan | salt.modules.napalm_snmp | nan | | nan | salt.modules.napalm_users | nan | | nan | salt.modules.napalmyangmod | nan | | nan | salt.modules.netaddress | nan | | nan | salt.modules.netbox | nan | | nan | salt.modules.netbsd_sysctl | 
The remainder of this page is the alphabetical index of Salt module reference pages, n through w: the execution modules (salt.modules.netbsdservice through salt.modules.zypperpkg), then salt.netapi.*, salt.output.*, salt.pillar.*, salt.proxy.*, salt.queues.*, salt.renderers.*, salt.returners.*, salt.roster.*, salt.runners.*, salt.sdb.*, salt.serializers.*, salt.states.*, salt.thorium.*, salt.tokens.*, salt.tops.*, salt.utils.*, and salt.wheel.*. Nearly every entry is a bare page name with no summary; the entries that do carry one are:

| Module | Summary |
| --- | --- |
| salt.modules.pkg | A virtual module for installing software packages |
| salt.modules.service | A virtual module for service management |
| salt.modules.shadow | A virtual module for shadow file / password management |
| salt.modules.sysctl | A virtual module for managing sysctl parameters |
| salt.modules.user | A virtual module for user management |
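Since the index gives only names, it may help to see how the documented virtual modules are invoked from the Salt CLI. This is a minimal sketch: the minion ID (minion1) and the package, service, and user names are illustrative assumptions, and each virtual module dispatches to the platform-specific implementation on the targeted minion.

```
# pkg dispatches to the platform package manager (apt, yum, zypper, ...)
salt 'minion1' pkg.install nginx

# service dispatches to the platform init system (systemd, upstart, rc.d, ...)
salt 'minion1' service.restart nginx

# user and shadow manage local accounts and password data
salt 'minion1' user.add deploy
salt 'minion1' shadow.info deploy
```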
{ "category": "Provisioning", "file_name": ".md", "project_name": "Tinkerbell", "subcategory": "Automation & Configuration" }
[ { "data": "This guide lets you quickly evaluate Tinkerbell. Well walk you through setting up the Tinkerbell stack and provisioning a sample machine. See the install guide for details on setting up Tinkerbell in a production environment. There are couple of ways to get started with Tinkerbell, pick the one that suits you best. Follow these steps to create the stack on a Libvirt VM using Vagrant. Then deploy a VM and provision an OS onto it. Clone this repository ``` git clone https://github.com/tinkerbell/playground.git cd playground ``` Start the stack ``` cd vagrant vagrant up ``` ``` Bringing machine 'stack' up with 'libvirt' provider... ==> stack: Checking if box 'generic/ubuntu2204' version '4.3.4' is up to date... ==> stack: Creating image (snapshot of base box volume). ==> stack: Creating domain with the following settings... ==> stack: -- Name: vagrant_stack ==> stack: -- Description: Source: /home/tink/repos/tinkerbell/sandbox/vagrant/Vagrantfile ==> stack: -- Domain type: kvm ==> stack: -- Cpus: 2 ==> stack: -- Feature: acpi ==> stack: -- Feature: apic ==> stack: -- Feature: pae ==> stack: -- Clock offset: utc ==> stack: -- Memory: 2048M ==> stack: -- Base box: generic/ubuntu2204 ==> stack: -- Storage pool: default ==> stack: -- Image(vda): /var/lib/libvirt/images/vagrant_stack.img, virtio, 128G ==> stack: -- Disk driver opts: cache='default' ==> stack: -- Graphics Type: vnc ==> stack: -- Video Type: cirrus ==> stack: -- Video VRAM: 256 ==> stack: -- Video 3D accel: false ==> stack: -- Keymap: en-us ==> stack: -- TPM Backend: passthrough ==> stack: -- INPUT: type=mouse, bus=ps2 ==> stack: Creating shared folders metadata... ==> stack: Starting domain. ==> stack: Domain launching with graphics connection settings... ==> stack: -- Graphics Port: 5900 ==> stack: -- Graphics IP: 127.0.0.1 ==> stack: -- Graphics Password: Not defined ==> stack: -- Graphics Websocket: 5700 ==> stack: Waiting for domain to get an IP address... ==> stack: Waiting for machine to boot. This may take a few minutes... stack: SSH address: 192.168.121.127:22 stack: SSH username: vagrant stack: SSH auth method: private key stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: Warning: Connection refused. Retrying... stack: stack: Vagrant insecure key detected. Vagrant will automatically replace stack: this with a newly generated keypair for better security. stack: stack: Inserting generated public key within guest... stack: Removing insecure key from the guest if it's present... stack: Key inserted! Disconnecting and reconnecting using new SSH key... ==> stack: Machine booted and ready! ==> stack: Rsyncing folder: /home/tink/repos/tinkerbell/sandbox/vagrant/ => /sandbox/stack ==> stack: Configuring and enabling network interfaces... ==> stack: Running provisioner: shell... 
    stack: Running: /tmp/vagrant-shell20231031-285946-1krhzm0.sh
    stack: + main 192.168.56.4 192.168.56.43 08:00:27:9e:f5:3a /sandbox/stack/ 192.168.56.5 0.4.2 eth1 1.28.3 v5.6.0 ''
    stack: + local host_ip=192.168.56.4
    stack: + local worker_ip=192.168.56.43
    stack: + local worker_mac=08:00:27:9e:f5:3a
    stack: + local manifests_dir=/sandbox/stack/
    stack: + local loadbalancer_ip=192.168.56.5
    stack: + local helm_chart_version=0.4.2
    stack: + local loadbalancer_interface=eth1
    stack: + local kubectl_version=1.28.3
    stack: + local k3d_version=v5.6.0
    stack: + update_apt
    stack: + apt-get update
    stack: + DEBIAN_FRONTEND=noninteractive
    stack: + command apt-get --allow-change-held-packages --allow-downgrades --allow-remove-essential --allow-unauthenticated --option Dpkg::Options::=--force-confdef --option Dpkg::Options::=--force-confold --yes update
    stack: Hit:1 https://mirrors.edge.kernel.org/ubuntu jammy InRelease
    stack: Get:2 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease [119 kB]
    stack: Get:3 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease [109 kB]
    stack: Get:4 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease [110 kB]
    stack: Get:5 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main amd64 Packages [1,148 kB]
    stack: Get:6 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main Translation-en [245 kB]
    stack: Get:7 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main amd64 c-n-f Metadata [16.1 kB]
    stack: Get:8 https://mirrors.edge.kernel.org/ubuntu jammy-updates/restricted amd64 Packages [1,103 kB]
    stack: Get:9 https://mirrors.edge.kernel.org/ubuntu jammy-updates/restricted Translation-en [179 kB]
    stack: Get:10 https://mirrors.edge.kernel.org/ubuntu jammy-updates/restricted amd64 c-n-f Metadata [536 B]
    stack: Get:11 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe amd64 Packages [998 kB]
    stack: Get:12 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe Translation-en [218 kB]
    stack: Get:13 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe amd64 c-n-f Metadata [22.0 kB]
    stack: Get:14 https://mirrors.edge.kernel.org/ubuntu jammy-backports/main amd64 Packages [64.2 kB]
    stack: Get:15 https://mirrors.edge.kernel.org/ubuntu jammy-backports/main amd64 c-n-f Metadata [388 B]
    stack: Get:16 https://mirrors.edge.kernel.org/ubuntu jammy-backports/universe amd64 Packages [27.8 kB]
    stack: Get:17 https://mirrors.edge.kernel.org/ubuntu jammy-backports/universe amd64 c-n-f Metadata [644 B]
    stack: Get:18 https://mirrors.edge.kernel.org/ubuntu jammy-security/main amd64 Packages [938 kB]
    stack: Get:19 https://mirrors.edge.kernel.org/ubuntu jammy-security/main Translation-en [185 kB]
    stack: Get:20 https://mirrors.edge.kernel.org/ubuntu jammy-security/main amd64 c-n-f Metadata [11.4 kB]
    stack: Get:21 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted amd64 Packages [1,079 kB]
    stack: Get:22 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted Translation-en [175 kB]
    stack: Get:23 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted amd64 c-n-f Metadata [536 B]
    stack: Get:24 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe amd64 Packages [796 kB]
    stack: Get:25 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe Translation-en [146 kB]
    stack: Get:26 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe amd64 c-n-f Metadata [16.8 kB]
    stack: Fetched 7,709 kB in 2s (4,266 kB/s)
    stack: Reading package lists...
    stack: + install_docker
    stack: + curl -fsSL https://download.docker.com/linux/ubuntu/gpg
    stack: + sudo apt-key add -
    stack: Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
    stack: OK
    stack: ++ lsb_release -cs
    stack: + add-apt-repository 'deb https://download.docker.com/linux/ubuntu jammy stable'
    stack: Get:1 https://download.docker.com/linux/ubuntu jammy InRelease [48.8 kB]
    stack: Get:2 https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages [22.7 kB]
    stack: Hit:3 https://mirrors.edge.kernel.org/ubuntu jammy InRelease
    stack: Hit:4 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease
    stack: Hit:5 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease
    stack: Hit:6 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease
    stack: Fetched 71.5 kB in 6s (11.8 kB/s)
    stack: Reading package lists...
    stack: W: https://download.docker.com/linux/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
    stack: Repository: 'deb https://download.docker.com/linux/ubuntu jammy stable'
    stack: Description:
    stack: Archive for codename: jammy components: stable
    stack: More info: https://download.docker.com/linux/ubuntu
    stack: Adding repository.
    stack: Adding deb entry to /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-jammy.list
    stack: Adding disabled deb-src entry to /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-jammy.list
    stack: + update_apt
    stack: + apt-get update
    stack: + DEBIAN_FRONTEND=noninteractive
    stack: + command apt-get --allow-change-held-packages --allow-downgrades --allow-remove-essential --allow-unauthenticated --option Dpkg::Options::=--force-confdef --option Dpkg::Options::=--force-confold --yes update
    stack: Hit:1 https://mirrors.edge.kernel.org/ubuntu jammy InRelease
    stack: Hit:2 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease
    stack: Hit:3 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease
    stack: Hit:4 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease
    stack: Hit:5 https://download.docker.com/linux/ubuntu jammy InRelease
    stack: Reading package lists...
    stack: W: https://download.docker.com/linux/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
    stack: + apt-get install --no-install-recommends containerd.io docker-ce docker-ce-cli
    stack: + DEBIAN_FRONTEND=noninteractive
    stack: + command apt-get --allow-change-held-packages --allow-downgrades --allow-remove-essential --allow-unauthenticated --option Dpkg::Options::=--force-confdef --option Dpkg::Options::=--force-confold --yes install --no-install-recommends containerd.io docker-ce docker-ce-cli
    stack: Reading package lists...
    stack: Building dependency tree...
    stack: Reading state information...
    stack: Suggested packages:
    stack:   aufs-tools cgroupfs-mount | cgroup-lite
    stack: Recommended packages:
    stack:   docker-ce-rootless-extras libltdl7 pigz docker-buildx-plugin
    stack:   docker-compose-plugin
    stack: The following NEW packages will be installed:
    stack:   containerd.io docker-ce docker-ce-cli
    stack: 0 upgraded, 3 newly installed, 0 to remove and 29 not upgraded.
    stack: Need to get 64.5 MB of archives.
    stack: After this operation, 249 MB of additional disk space will be used.
    stack: Get:1 https://download.docker.com/linux/ubuntu jammy/stable amd64 containerd.io amd64 1.6.24-1 [28.6 MB]
    stack: Get:2 https://download.docker.com/linux/ubuntu jammy/stable amd64 docker-ce-cli amd64 5:24.0.7-1~ubuntu.22.04~jammy [13.3 MB]
    stack: Get:3 https://download.docker.com/linux/ubuntu jammy/stable amd64 docker-ce amd64 5:24.0.7-1~ubuntu.22.04~jammy [22.6 MB]
    stack: Fetched 64.5 MB in 1s (77.3 MB/s)
    stack: Selecting previously unselected package containerd.io.
    (Reading database ... 76025 files and directories currently installed.)
    stack: Preparing to unpack .../containerd.io_1.6.24-1_amd64.deb ...
    stack: Unpacking containerd.io (1.6.24-1) ...
    stack: Selecting previously unselected package docker-ce-cli.
    stack: Preparing to unpack .../docker-ce-cli_5%3a24.0.7-1~ubuntu.22.04~jammy_amd64.deb ...
    stack: Unpacking docker-ce-cli (5:24.0.7-1~ubuntu.22.04~jammy) ...
    stack: Selecting previously unselected package docker-ce.
    stack: Preparing to unpack .../docker-ce_5%3a24.0.7-1~ubuntu.22.04~jammy_amd64.deb ...
    stack: Unpacking docker-ce (5:24.0.7-1~ubuntu.22.04~jammy) ...
    stack: Setting up containerd.io (1.6.24-1) ...
    stack: Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
    stack: Setting up docker-ce-cli (5:24.0.7-1~ubuntu.22.04~jammy) ...
    stack: Setting up docker-ce (5:24.0.7-1~ubuntu.22.04~jammy) ...
    stack: Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
    stack: Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
    stack: Processing triggers for man-db (2.10.2-1) ...
    stack: NEEDRESTART-VER: 3.5
    stack: NEEDRESTART-KCUR: 5.15.0-86-generic
    stack: NEEDRESTART-KEXP: 5.15.0-86-generic
    stack: NEEDRESTART-KSTA: 1
    stack: + gpasswd -a vagrant docker
    stack: Adding user vagrant to group docker
    stack: + sudo ethtool -K eth1 tx off sg off tso off
    stack: Actual changes:
    stack: tx-scatter-gather: off
    stack: tx-checksum-ip-generic: off
    stack: tx-generic-segmentation: off [not requested]
    stack: tx-tcp-segmentation: off
    stack: tx-tcp-ecn-segmentation: off
    stack: tx-tcp6-segmentation: off
    stack: + install_kubectl 1.28.3
    stack: + local kubectl_version=1.28.3
    stack: + curl -LO https://dl.k8s.io/v1.28.3/bin/linux/amd64/kubectl
    stack:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    stack:                                  Dload  Upload   Total   Spent    Left  Speed
    100   138  100   138    0     0    410      0 --:--:-- --:--:-- --:--:--   410
    100 47.5M  100 47.5M    0     0  24.8M      0  0:00:01  0:00:01 --:--:-- 37.9M
    stack: + chmod +x ./kubectl
    stack: + mv ./kubectl /usr/local/bin/kubectl
    stack: + run_helm 192.168.56.4 192.168.56.43 08:00:27:9e:f5:3a /sandbox/stack/ 192.168.56.5 0.4.2 eth1 v5.6.0
    stack: + local host_ip=192.168.56.4
    stack: + local worker_ip=192.168.56.43
    stack: + local worker_mac=08:00:27:9e:f5:3a
    stack: + local manifests_dir=/sandbox/stack/
    stack: + local loadbalancer_ip=192.168.56.5
    stack: + local helm_chart_version=0.4.2
    stack: + local loadbalancer_interface=eth1
    stack: + local k3d_version=v5.6.0
    stack: + local namespace=tink-system
    stack: + install_k3d v5.6.0
    stack: + local k3d_Version=v5.6.0
    stack: + wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh
    stack: + TAG=v5.6.0
    stack: + bash
    stack: Preparing to install k3d into /usr/local/bin
    stack: k3d installed into /usr/local/bin/k3d
    stack: Run 'k3d --help' to see what you can do with it.
    stack: + start_k3d
    stack: + k3d cluster create --network host --no-lb --k3s-arg --disable=traefik,servicelb --k3s-arg --kube-apiserver-arg=feature-gates=MixedProtocolLBService=true --host-pid-mode
    stack: INFO[0000] [SimpleConfig] Hostnetwork selected - disabling injection of docker host into the cluster, server load balancer and setting the api port to the k3s default
    stack: WARN[0000] No node filter specified
    stack: WARN[0000] No node filter specified
    stack: INFO[0000] [ClusterConfig] Hostnetwork selected - disabling injection of docker host into the cluster, server load balancer and setting the api port to the k3s default
    stack: INFO[0000] Prep: Network
    stack: INFO[0000] Re-using existing network 'host' (2ecf52da28c15a6bbe026b5e71f3af288fefbbb222b2762bafc29e9b1791ff8b)
    stack: INFO[0000] Created image volume k3d-k3s-default-images
    stack: INFO[0000] Starting new tools node...
    stack: INFO[0001] Creating node 'k3d-k3s-default-server-0'
    stack: INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.6.0'
    stack: INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.27.4-k3s1'
    stack: INFO[0003] Starting Node 'k3d-k3s-default-tools'
    stack: INFO[0010] Using the k3d-tools node to gather environment information
    stack: INFO[0011] Starting cluster 'k3s-default'
    stack: INFO[0011] Starting servers...
    stack: INFO[0011] Starting Node 'k3d-k3s-default-server-0'
    stack: INFO[0014] All agents already running.
    stack: INFO[0014] All helpers already running.
    stack: INFO[0014] Cluster 'k3s-default' created successfully!
    stack: INFO[0014] You can now use it like this:
    stack: kubectl cluster-info
    stack: + mkdir -p /root/.kube/
    stack: + k3d kubeconfig get -a
    stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s
    stack: error: no matching resources found
    stack: + sleep 1
    stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s
    stack: error: no matching resources found
    stack: + sleep 1
    stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s
    stack: error: no matching resources found
    stack: + sleep 1
    stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s
    stack: error: no matching resources found
    stack: + sleep 1
    stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s
    stack: error: no matching resources found
    stack: + sleep 1
    stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s
    stack: node/k3d-k3s-default-server-0 condition met
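# The repeated "error: no matching resources found" messages above come from
# the provisioning script polling until the k3d node object exists; roughly
# (a hypothetical reconstruction, not the script's literal text):
#   until kubectl wait --for=condition=Ready nodes --all --timeout=600s; do sleep 1; done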
    stack: + install_helm
    stack: + helm_ver=v3.9.4
    stack: + curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
    stack: + chmod 700 get_helm.sh
    stack: + ./get_helm.sh --version v3.9.4
    stack: Downloading https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz
    stack: Verifying checksum... Done.
    stack: Preparing to install helm into /usr/local/bin
    stack: helm installed into /usr/local/bin/helm
    stack: + helm_install_tink_stack tink-system 0.4.2 eth1 192.168.56.5
    stack: + local namespace=tink-system
    stack: + local version=0.4.2
    stack: + local interface=eth1
    stack: + local loadbalancer_ip=192.168.56.5
    stack: + trusted_proxies=
    stack: + '[' '' '!=' '' ']'
    stack: ++ kubectl get nodes -o 'jsonpath={.items[*].spec.podCIDR}'
    stack: ++ tr ' ' ,
    stack: + trusted_proxies=
    stack: + '[' '' '!=' '' ']'
    stack: ++ kubectl get nodes -o 'jsonpath={.items[*].spec.podCIDR}'
    stack: ++ tr ' ' ,
    stack: + trusted_proxies=10.42.0.0/24
    stack: + '[' 10.42.0.0/24 '!=' '' ']'
    stack: + helm install tink-stack oci://ghcr.io/tinkerbell/charts/stack --version 0.4.2 --create-namespace --namespace tink-system --wait --set 'smee.trustedProxies={10.42.0.0/24}' --set 'hegel.trustedProxies={10.42.0.0/24}' --set stack.kubevip.interface=eth1 --set stack.relay.sourceInterface=eth1 --set stack.loadBalancerIP=192.168.56.5 --set smee.publicIP=192.168.56.5
    stack: NAME: tink-stack
    stack: LAST DEPLOYED: Tue Oct 31 21:56:58 2023
    stack: NAMESPACE: tink-system
    stack: STATUS: deployed
    stack: REVISION: 1
    stack: TEST SUITE: None
    stack: + apply_manifests 192.168.56.43 08:00:27:9e:f5:3a /sandbox/stack/ 192.168.56.5 tink-system
    stack: + local worker_ip=192.168.56.43
    stack: + local worker_mac=08:00:27:9e:f5:3a
    stack: + local manifests_dir=/sandbox/stack/
    stack: + local host_ip=192.168.56.5
    stack: + local namespace=tink-system
    stack: + disk_device=/dev/sda
    stack: + lsblk
    stack: + grep -q vda
    stack: + disk_device=/dev/vda
    stack: + export DISK_DEVICE=/dev/vda
    stack: + DISK_DEVICE=/dev/vda
    stack: + export TINKERBELL_CLIENT_IP=192.168.56.43
    stack: + TINKERBELL_CLIENT_IP=192.168.56.43
    stack: + export TINKERBELL_CLIENT_MAC=08:00:27:9e:f5:3a
    stack: + TINKERBELL_CLIENT_MAC=08:00:27:9e:f5:3a
    stack: + export TINKERBELL_HOST_IP=192.168.56.5
    stack: + TINKERBELL_HOST_IP=192.168.56.5
    stack: + for i in "$manifests_dir"/{hardware.yaml,template.yaml,workflow.yaml}
    stack: + envsubst
    stack: + echo -e
    stack: + for i in "$manifests_dir"/{hardware.yaml,template.yaml,workflow.yaml}
    stack: + envsubst
    stack: + echo -e
    stack: + for i in "$manifests_dir"/{hardware.yaml,template.yaml,workflow.yaml}
    stack: + envsubst
    stack: + echo -e
    stack: + kubectl apply -n tink-system -f /tmp/manifests.yaml
    stack: hardware.tinkerbell.org/machine1 created
    stack: template.tinkerbell.org/ubuntu-jammy created
    stack: workflow.tinkerbell.org/sandbox-workflow created
    stack: + kubectl apply -n tink-system -f /sandbox/stack//ubuntu-download.yaml
    stack: configmap/download-image created
    stack: job.batch/download-ubuntu-jammy created
    stack: + kubectl_for_vagrant_user
    stack: + runuser -l vagrant -c 'mkdir -p ~/.kube/'
    stack: + runuser -l vagrant -c 'k3d kubeconfig get -a > ~/.kube/config'
    stack: + chmod 600 /home/vagrant/.kube/config
    stack: + echo 'export KUBECONFIG="/home/vagrant/.kube/config"'
    stack: all done!
    stack: + echo 'all done!'
```

Wait for HookOS and Ubuntu image to be downloaded

```
vagrant ssh stack
kubectl get jobs -n tink-system --watch
exit
```

```
NAME                    COMPLETIONS   DURATION   AGE
download-hook           1/1           27s        72s
download-ubuntu-jammy   0/1           49s        49s
download-ubuntu-jammy   0/1           70s        70s
download-ubuntu-jammy   0/1           72s        72s
download-ubuntu-jammy   1/1           72s        72s
```
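Before booting the test machine, you can optionally confirm that the provisioner created the Tinkerbell objects it applied (the Hardware, Template, and Workflow shown in the trace above). A minimal sketch, using the fully qualified resource names of the tinkerbell.org CRDs:

```
vagrant ssh stack
kubectl get hardware.tinkerbell.org,template.tinkerbell.org,workflow.tinkerbell.org -n tink-system
exit
```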
Start the machine to be provisioned

```
vagrant up machine1
```

```
Bringing machine 'machine1' up with 'libvirt' provider...
==> machine1: Creating domain with the following settings...
==> machine1:  -- Name:              vagrant_machine1
==> machine1:  -- Description:       Source: /home/tink/repos/tinkerbell/sandbox/vagrant/Vagrantfile
==> machine1:  -- Domain type:       kvm
==> machine1:  -- Cpus:              2
==> machine1:  -- Feature:           acpi
==> machine1:  -- Feature:           apic
==> machine1:  -- Feature:           pae
==> machine1:  -- Clock offset:      utc
==> machine1:  -- Memory:            4096M
==> machine1:  -- Storage pool:      default
==> machine1:  -- Disk driver opts:  cache='default'
==> machine1:  -- Graphics Type:     vnc
==> machine1:  -- Video Type:        cirrus
==> machine1:  -- Video VRAM:        16384
==> machine1:  -- Video 3D accel:    false
==> machine1:  -- Keymap:            en-us
==> machine1:  -- TPM Backend:       passthrough
==> machine1:  -- Boot device:       hd
==> machine1:  -- Boot device:       network
==> machine1:  -- Disk(vda):         /var/lib/libvirt/images/vagrant_machine1-vda.qcow2, virtio, 20G
==> machine1:  -- INPUT:             type=mouse, bus=ps2
==> machine1: Starting domain.
==> machine1: Domain launching with graphics connection settings...
==> machine1:  -- Graphics Port:      5901
==> machine1:  -- Graphics IP:        0.0.0.0
==> machine1:  -- Graphics Password:  Not defined
==> machine1:  -- Graphics Websocket: 5701
```

Watch the provision complete

```
vagrant ssh stack
kubectl get -n tink-system workflow sandbox-workflow --watch
```

```
NAME               TEMPLATE       STATE
sandbox-workflow   ubuntu-jammy   STATE_PENDING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_SUCCESS
```

Login to the machine

The machine has been provisioned with Ubuntu. You can now SSH into the machine.

```
ssh tink@192.168.56.43 # user/pass => tink/tink
```
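If the workflow sits in STATE_RUNNING or never leaves STATE_PENDING, a useful first step is to inspect the Workflow object and the stack's pods from the stack VM. A minimal sketch using standard kubectl commands; nothing Tinkerbell-specific is assumed beyond the resources created above:

```
vagrant ssh stack
kubectl describe workflow sandbox-workflow -n tink-system
kubectl get pods -n tink-system
exit
```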
Follow these steps to create the stack on a VirtualBox VM using Vagrant. Then deploy a VM and provision an OS onto it.

Clone this repository

```
git clone https://github.com/tinkerbell/sandbox.git
cd sandbox
```

Start the stack

```
cd vagrant
vagrant up
```

```
Bringing machine 'stack' up with 'virtualbox' provider...
==> stack: Importing base box 'generic/ubuntu2204'...
==> stack: Matching MAC address for NAT networking...
==> stack: Checking if box 'generic/ubuntu2204' version '4.1.14' is up to date...
==> stack: Setting the name of the VM: vagrant_stack_1698780219785_94529
==> stack: Clearing any previously set network interfaces...
==> stack: Preparing network interfaces based on configuration...
    stack: Adapter 1: nat
    stack: Adapter 2: hostonly
==> stack: Forwarding ports...
    stack: 22 (guest) => 2222 (host) (adapter 1)
==> stack: Running 'pre-boot' VM customizations...
==> stack: Booting VM...
==> stack: Waiting for machine to boot. This may take a few minutes...
    stack: SSH address: 127.0.0.1:2222
    stack: SSH username: vagrant
    stack: SSH auth method: private key
    stack: Warning: Connection reset. Retrying...
    stack:
    stack: Vagrant insecure key detected. Vagrant will automatically replace
    stack: this with a newly generated keypair for better security.
    stack:
    stack: Inserting generated public key within guest...
    stack: Removing insecure key from the guest if it's present...
    stack: Key inserted! Disconnecting and reconnecting using new SSH key...
==> stack: Machine booted and ready!
==> stack: Checking for guest additions in VM...
    stack: The guest additions on this VM do not match the installed version of
    stack: VirtualBox! In most cases this is fine, but in rare cases it can
    stack: prevent things such as shared folders from working properly. If you see
    stack: shared folder errors, please make sure the guest additions within the
    stack: virtual machine match the version of VirtualBox you have installed on
    stack: your host and reload your VM.
    stack:
    stack: Guest Additions Version: 6.1.38
    stack: VirtualBox Version: 7.0
==> stack: Configuring and enabling network interfaces...
==> stack: Mounting shared folders...
    stack: /sandbox/stack => ~/tinkerbell/sandbox/vagrant
==> stack: Running provisioner: shell...
    stack: Running: /var/folders/xt/8w5g0fv54tj4njvjhk025r0000gr/T/vagrant-shell20231031-54683-k09nai.sh
    stack: + main 192.168.56.4 192.168.56.43 08:00:27:9e:f5:3a /sandbox/stack/ 192.168.56.5 0.4.2 eth1 1.28.3 v5.6.0 ''
    stack: + local host_ip=192.168.56.4
    stack: + local worker_ip=192.168.56.43
    stack: + local worker_mac=08:00:27:9e:f5:3a
    stack: + local manifests_dir=/sandbox/stack/
    stack: + local loadbalancer_ip=192.168.56.5
    stack: + local helm_chart_version=0.4.2
    stack: + local loadbalancer_interface=eth1
    stack: + local kubectl_version=1.28.3
    stack: + local k3d_version=v5.6.0
    stack: + update_apt
    stack: + apt-get update
    stack: + DEBIAN_FRONTEND=noninteractive
    stack: + command apt-get --allow-change-held-packages --allow-downgrades --allow-remove-essential --allow-unauthenticated --option Dpkg::Options::=--force-confdef --option Dpkg::Options::=--force-confold --yes update
    stack: Hit:1 https://mirrors.edge.kernel.org/ubuntu jammy InRelease
    stack: Get:2 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease [119 kB]
    stack: Get:3 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease [109 kB]
    stack: Get:4 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease [110 kB]
    stack: Get:5 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main amd64 Packages [1,148 kB]
    stack: Get:6 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main Translation-en [245 kB]
    stack: Get:7 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main amd64 c-n-f Metadata [16.1 kB]
    stack: Get:8 https://mirrors.edge.kernel.org/ubuntu jammy-updates/restricted amd64 Packages [1,103 kB]
    stack: Get:9 https://mirrors.edge.kernel.org/ubuntu jammy-updates/restricted Translation-en [179 kB]
    stack: Get:10 https://mirrors.edge.kernel.org/ubuntu jammy-updates/restricted amd64 c-n-f Metadata [536 B]
    stack: Get:11 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe amd64 Packages [998 kB]
    stack: Get:12 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe Translation-en [218 kB]
    stack: Get:13 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe amd64 c-n-f Metadata [22.0 kB]
    stack: Get:14 https://mirrors.edge.kernel.org/ubuntu jammy-updates/multiverse amd64 Packages [41.6 kB]
    stack: Get:15 https://mirrors.edge.kernel.org/ubuntu jammy-updates/multiverse Translation-en [9,768 B]
    stack: Get:16 https://mirrors.edge.kernel.org/ubuntu jammy-updates/multiverse amd64 c-n-f Metadata [472 B]
    stack: Get:17 https://mirrors.edge.kernel.org/ubuntu jammy-backports/main amd64 Packages [64.2 kB]
    stack: Get:18 https://mirrors.edge.kernel.org/ubuntu jammy-backports/main Translation-en [10.5 kB]
    stack: Get:19 https://mirrors.edge.kernel.org/ubuntu jammy-backports/main amd64 c-n-f Metadata [388 B]
    stack: Get:20 https://mirrors.edge.kernel.org/ubuntu jammy-backports/universe amd64 Packages [27.8 kB]
    stack: Get:21 https://mirrors.edge.kernel.org/ubuntu jammy-backports/universe Translation-en [16.4 kB]
    stack: Get:22 https://mirrors.edge.kernel.org/ubuntu jammy-backports/universe amd64 c-n-f Metadata [644 B]
    stack: Get:23 https://mirrors.edge.kernel.org/ubuntu jammy-security/main amd64 Packages [938 kB]
    stack: Get:24 https://mirrors.edge.kernel.org/ubuntu jammy-security/main Translation-en [185 kB]
    stack: Get:25 https://mirrors.edge.kernel.org/ubuntu jammy-security/main amd64 c-n-f Metadata [11.4 kB]
    stack: Get:26 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted amd64 Packages [1,079 kB]
    stack: Get:27 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted Translation-en [175 kB]
    stack: Get:28 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted amd64 c-n-f Metadata [536 B]
    stack: Get:29 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe amd64 Packages [796 kB]
    stack: Get:30 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe Translation-en [146 kB]
    stack: Get:31 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe amd64 c-n-f Metadata [16.8 kB]
    stack: Get:32 https://mirrors.edge.kernel.org/ubuntu jammy-security/multiverse amd64 Packages [36.5 kB]
    stack: Get:33 https://mirrors.edge.kernel.org/ubuntu jammy-security/multiverse Translation-en [7,060 B]
    stack: Get:34 https://mirrors.edge.kernel.org/ubuntu jammy-security/multiverse amd64 c-n-f Metadata [260 B]
    stack: Fetched 7,831 kB in 2s (3,321 kB/s)
    stack: Reading package lists...
    stack: + install_docker
    stack: + curl -fsSL https://download.docker.com/linux/ubuntu/gpg
    stack: + sudo apt-key add -
    stack: Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
    stack: OK
    stack: ++ lsb_release -cs
    stack: + add-apt-repository 'deb https://download.docker.com/linux/ubuntu jammy stable'
    stack: Hit:1 https://mirrors.edge.kernel.org/ubuntu jammy InRelease
    stack: Hit:2 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease
    stack: Hit:3 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease
    stack: Hit:4 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease
    stack: Get:5 https://download.docker.com/linux/ubuntu jammy InRelease [48.8 kB]
    stack: Get:6 https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages [22.7 kB]
    stack: Fetched 71.5 kB in 1s (72.5 kB/s)
    stack: Reading package lists...
    stack: W: https://download.docker.com/linux/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
    stack: Repository: 'deb https://download.docker.com/linux/ubuntu jammy stable'
    stack: Description:
    stack: Archive for codename: jammy components: stable
    stack: More info: https://download.docker.com/linux/ubuntu
    stack: Adding repository.
stack: Adding deb entry to /etc/apt/sources.list.d/archiveuri-httpsdownloaddockercomlinuxubuntu-jammy.list stack: Adding disabled deb-src entry to" }, { "data": "stack: + update_apt stack: + apt-get update stack: + DEBIAN_FRONTEND=noninteractive stack: + command apt-get --allow-change-held-packages --allow-downgrades --allow-remove-essential --allow-unauthenticated --option Dpkg::Options::=--force-confdef --option Dpkg::Options::=--force-confold --yes update stack: Hit:1 https://download.docker.com/linux/ubuntu jammy InRelease stack: Hit:2 https://mirrors.edge.kernel.org/ubuntu jammy InRelease stack: Hit:3 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease stack: Hit:4 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease stack: Hit:5 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease stack: Reading package lists... stack: W: https://download.docker.com/linux/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details. stack: + apt-get install --no-install-recommends containerd.io docker-ce docker-ce-cli stack: + DEBIAN_FRONTEND=noninteractive stack: + command apt-get --allow-change-held-packages --allow-downgrades --allow-remove-essential --allow-unauthenticated --option Dpkg::Options::=--force-confdef --option Dpkg::Options::=--force-confold --yes install --no-install-recommends containerd.io docker-ce docker-ce-cli stack: Reading package lists... stack: Building dependency tree... stack: Reading state information... stack: Suggested packages: stack: aufs-tools cgroupfs-mount | cgroup-lite stack: Recommended packages: stack: docker-ce-rootless-extras libltdl7 pigz docker-buildx-plugin stack: docker-compose-plugin stack: The following NEW packages will be installed: stack: containerd.io docker-ce docker-ce-cli stack: 0 upgraded, 3 newly installed, 0 to remove and 195 not upgraded. stack: Need to get 64.5 MB of archives. stack: After this operation, 249 MB of additional disk space will be used. stack: Get:1 https://download.docker.com/linux/ubuntu jammy/stable amd64 containerd.io amd64 1.6.24-1 [28.6 MB] stack: Get:2 https://download.docker.com/linux/ubuntu jammy/stable amd64 docker-ce-cli amd64 5:24.0.7-1~ubuntu.22.04~jammy [13.3 MB] stack: Get:3 https://download.docker.com/linux/ubuntu jammy/stable amd64 docker-ce amd64 5:24.0.7-1~ubuntu.22.04~jammy [22.6 MB] stack: Fetched 64.5 MB in 1s (53.8 MB/s) stack: Selecting previously unselected package containerd.io. (Reading database ... 75348 files and directories currently installed.) stack: Preparing to unpack .../containerd.io1.6.24-1amd64.deb ... stack: Unpacking containerd.io (1.6.24-1) ... stack: Selecting previously unselected package docker-ce-cli. stack: Preparing to unpack .../docker-ce-cli5%3a24.0.7-1~ubuntu.22.04~jammyamd64.deb ... stack: Unpacking docker-ce-cli (5:24.0.7-1~ubuntu.22.04~jammy) ... stack: Selecting previously unselected package docker-ce. stack: Preparing to unpack .../docker-ce5%3a24.0.7-1~ubuntu.22.04~jammyamd64.deb ... stack: Unpacking docker-ce (5:24.0.7-1~ubuntu.22.04~jammy) ... stack: Setting up containerd.io (1.6.24-1) ... stack: Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service /lib/systemd/system/containerd.service. stack: Setting up docker-ce-cli (5:24.0.7-1~ubuntu.22.04~jammy) ... stack: Setting up docker-ce (5:24.0.7-1~ubuntu.22.04~jammy) ... 
stack: Created symlink /etc/systemd/system/multi-user.target.wants/docker.service /lib/systemd/system/docker.service. stack: Created symlink /etc/systemd/system/sockets.target.wants/docker.socket /lib/systemd/system/docker.socket. stack: Processing triggers for man-db (2.10.2-1) ... stack: NEEDRESTART-VER: 3.5 stack: NEEDRESTART-KCUR: 5.15.0-48-generic stack: NEEDRESTART-KEXP: 5.15.0-48-generic stack: NEEDRESTART-KSTA: 1 stack: + gpasswd -a vagrant docker stack: Adding user vagrant to group docker stack: + sudo ethtool -K eth1 tx off sg off tso off stack: Actual changes: stack: tx-scatter-gather: off stack: tx-checksum-ip-generic: off stack: tx-generic-segmentation: off [not requested] stack: tx-tcp-segmentation: off stack: + install_kubectl 1.28.3 stack: + local kubectl_version=1.28.3 stack: + curl -LO https://dl.k8s.io/v1.28.3/bin/linux/amd64/kubectl stack: % Total % Received % Xferd Average Speed Time Time Time Current stack: Dload Upload Total Spent Left Speed 100 138 100 138 0 0 242 0 --:--:-- --:--:-- --:--:-- 242 100 47.5M 100 47.5M 0 0 21.3M 0 0:00:02 0:00:02 --:--:-- 31.6M stack: + chmod +x ./kubectl stack: + mv ./kubectl /usr/local/bin/kubectl stack: + run_helm 192.168.56.4 192.168.56.43 08:00:27:9e:f5:3a /sandbox/stack/ 192.168.56.5 0.4.2 eth1 v5.6.0 stack: + local host_ip=192.168.56.4 stack: + local worker_ip=192.168.56.43 stack: + local worker_mac=08:00:27:9e:f5:3a stack: + local manifests_dir=/sandbox/stack/ stack: + local loadbalancer_ip=192.168.56.5 stack: + local helmchartversion=0.4.2 stack: + local loadbalancer_interface=eth1 stack: + local k3d_version=v5.6.0 stack: + local namespace=tink-system stack: + install_k3d v5.6.0 stack: + local k3d_Version=v5.6.0 stack: + wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh stack: + TAG=v5.6.0 stack: + bash stack: Preparing to install k3d into /usr/local/bin stack: k3d installed into /usr/local/bin/k3d stack: Run 'k3d --help' to see what you can do with" }, { "data": "stack: + start_k3d stack: + k3d cluster create --network host --no-lb --k3s-arg --disable=traefik,servicelb --k3s-arg --kube-apiserver-arg=feature-gates=MixedProtocolLBService=true --host-pid-mode stack: INFO[0000] [SimpleConfig] Hostnetwork selected - disabling injection of docker host into the cluster, server load balancer and setting the api port to the k3s default stack: WARN[0000] No node filter specified stack: WARN[0000] No node filter specified stack: INFO[0000] [ClusterConfig] Hostnetwork selected - disabling injection of docker host into the cluster, server load balancer and setting the api port to the k3s default stack: INFO[0000] Prep: Network stack: INFO[0000] Re-using existing network 'host' (0dfc7dbbdde7db0b7a7a5eba280e71248bb0cf010603bfaa0a0a09928df8d555) stack: INFO[0000] Created image volume k3d-k3s-default-images stack: INFO[0000] Starting new tools node... stack: INFO[0001] Creating node 'k3d-k3s-default-server-0' stack: INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.6.0' stack: INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.27.4-k3s1' stack: INFO[0002] Starting Node 'k3d-k3s-default-tools' stack: INFO[0008] Using the k3d-tools node to gather environment information stack: INFO[0008] Starting cluster 'k3s-default' stack: INFO[0008] Starting servers... stack: INFO[0008] Starting Node 'k3d-k3s-default-server-0' stack: INFO[0013] All agents already running. stack: INFO[0013] All helpers already running. stack: INFO[0013] Cluster 'k3s-default' created successfully! 
stack: INFO[0013] You can now use it like this: stack: kubectl cluster-info stack: + mkdir -p /root/.kube/ stack: + k3d kubeconfig get -a stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s stack: error: no matching resources found stack: + sleep 1 stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s stack: error: no matching resources found stack: + sleep 1 stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s stack: error: no matching resources found stack: + sleep 1 stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s stack: error: no matching resources found stack: + sleep 1 stack: + kubectl wait --for=condition=Ready nodes --all --timeout=600s stack: node/k3d-k3s-default-server-0 condition met stack: + install_helm stack: + helm_ver=v3.9.4 stack: + curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 stack: + chmod 700 get_helm.sh stack: + ./get_helm.sh --version v3.9.4 stack: Downloading https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz stack: Verifying checksum... Done. stack: Preparing to install helm into /usr/local/bin stack: helm installed into /usr/local/bin/helm stack: + helminstalltink_stack tink-system 0.4.2 eth1 192.168.56.5 stack: + local namespace=tink-system stack: + local version=0.4.2 stack: + local interface=eth1 stack: + local loadbalancer_ip=192.168.56.5 stack: + trusted_proxies= stack: + '[' '' '!=' '' ']' stack: ++ tr ' ' , stack: ++ kubectl get nodes -o 'jsonpath={.items[*].spec.podCIDR}' stack: + trusted_proxies= stack: + '[' '' '!=' '' ']' stack: + trusted_proxies= stack: + '[' '' '!=' '' ']' stack: ++ kubectl get nodes -o 'jsonpath={.items[*].spec.podCIDR}' stack: ++ tr ' ' , stack: + trusted_proxies=10.42.0.0/24 stack: + '[' 10.42.0.0/24 '!=' '' ']' stack: + helm install tink-stack oci://ghcr.io/tinkerbell/charts/stack --version 0.4.2 --create-namespace --namespace tink-system --wait --set 'smee.trustedProxies={10.42.0.0/24}' --set 'hegel.trustedProxies={10.42.0.0/24}' --set stack.kubevip.interface=eth1 --set stack.relay.sourceInterface=eth1 --set stack.loadBalancerIP=192.168.56.5 --set smee.publicIP=192.168.56.5 stack: NAME: tink-stack stack: LAST DEPLOYED: Tue Oct 31 19:25:06 2023 stack: NAMESPACE: tink-system stack: STATUS: deployed stack: REVISION: 1 stack: TEST SUITE: None stack: + apply_manifests 192.168.56.43 08:00:27:9e:f5:3a /sandbox/stack/ 192.168.56.5 tink-system stack: + local worker_ip=192.168.56.43 stack: + local worker_mac=08:00:27:9e:f5:3a stack: + local manifests_dir=/sandbox/stack/ stack: + local host_ip=192.168.56.5 stack: + local namespace=tink-system stack: + disk_device=/dev/sda stack: + lsblk stack: + grep -q vda stack: + export DISK_DEVICE=/dev/sda stack: + DISK_DEVICE=/dev/sda stack: + export TINKERBELLCLIENTIP=192.168.56.43 stack: + TINKERBELLCLIENTIP=192.168.56.43 stack: + export TINKERBELLCLIENTMAC=08:00:27:9e:f5:3a stack: + TINKERBELLCLIENTMAC=08:00:27:9e:f5:3a stack: + export TINKERBELLHOSTIP=192.168.56.5 stack: + TINKERBELLHOSTIP=192.168.56.5 stack: + for i in \"$manifests_dir\"/{hardware.yaml,template.yaml,workflow.yaml} stack: + envsubst stack: + echo -e stack: + for i in \"$manifests_dir\"/{hardware.yaml,template.yaml,workflow.yaml} stack: + envsubst stack: + echo -e stack: + for i in" }, { "data": "stack: + envsubst stack: + echo -e stack: + kubectl apply -n tink-system -f /tmp/manifests.yaml stack: hardware.tinkerbell.org/machine1 created stack: template.tinkerbell.org/ubuntu-jammy created stack: 
workflow.tinkerbell.org/sandbox-workflow created
stack: + kubectl apply -n tink-system -f /sandbox/stack//ubuntu-download.yaml
stack: configmap/download-image created
stack: job.batch/download-ubuntu-jammy created
stack: + kubectl_for_vagrant_user
stack: + runuser -l vagrant -c 'mkdir -p ~/.kube/'
stack: + runuser -l vagrant -c 'k3d kubeconfig get -a > ~/.kube/config'
stack: + chmod 600 /home/vagrant/.kube/config
stack: + echo 'export KUBECONFIG="/home/vagrant/.kube/config"'
stack: all done!
stack: + echo 'all done!'
```

Wait for HookOS and the Ubuntu image to be downloaded

```
vagrant ssh stack
kubectl get jobs -n tink-system --watch
exit
```

```
NAME                    COMPLETIONS   DURATION   AGE
download-hook           1/1           27s        72s
download-ubuntu-jammy   0/1           49s        49s
download-ubuntu-jammy   0/1           70s        70s
download-ubuntu-jammy   0/1           72s        72s
download-ubuntu-jammy   1/1           72s        72s
```

Start the machine to be provisioned

```
vagrant up machine1
```

```
Bringing machine 'machine1' up with 'virtualbox' provider...
==> machine1: Importing base box 'jtyr/pxe'...
==> machine1: Matching MAC address for NAT networking...
==> machine1: Checking if box 'jtyr/pxe' version '2' is up to date...
==> machine1: Setting the name of the VM: vagrant_machine1_1626365105119_9800
==> machine1: Fixed port collision for 22 => 2222. Now on port 2200.
==> machine1: Clearing any previously set network interfaces...
==> machine1: Preparing network interfaces based on configuration...
    machine1: Adapter 1: hostonly
==> machine1: Forwarding ports...
    machine1: 22 (guest) => 2200 (host) (adapter 1)
    machine1: VirtualBox adapter #1 not configured as "NAT". Skipping port
    machine1: forwards on this adapter.
==> machine1: Running 'pre-boot' VM customizations...
==> machine1: Booting VM...
==> machine1: Waiting for machine to boot. This may take a few minutes...
    machine1: SSH address: 127.0.0.1:22
    machine1: SSH username: vagrant
    machine1: SSH auth method: private key
    machine1: Warning: Authentication failure. Retrying...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.

If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.

If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.

If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
```

This timeout can be ignored in this walkthrough: machine1 has no operating system yet and boots from the network, so Vagrant cannot reach it over SSH while Tinkerbell provisions it. Provisioning continues regardless.

Watch the provision complete

```
vagrant ssh stack
kubectl get -n tink-system workflow sandbox-workflow --watch
```

```
NAME               TEMPLATE       STATE
sandbox-workflow   ubuntu-jammy   STATE_PENDING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_RUNNING
sandbox-workflow   ubuntu-jammy   STATE_SUCCESS
```

Login to the machine

The machine has been provisioned with Ubuntu. You can now SSH into it.

```
ssh tink@192.168.56.43
# user/pass => tink/tink
```
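If the workflow sits in STATE_PENDING or STATE_RUNNING for a long time, a few checks from inside the stack VM can help narrow things down. A minimal sketch; the resource names match this walkthrough's manifests:

```
vagrant ssh stack
kubectl get -n tink-system hardware,template,workflow      # the objects applied earlier
kubectl describe -n tink-system workflow sandbox-workflow  # per-action status and errors
kubectl get pods -n tink-system -o wide                    # are the Tinkerbell services healthy?
```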
At present, many open source edge-computing container projects share a default prerequisite: users must first build a Kubernetes cluster with a standard or specific tool, and then deploy the corresponding edge components into that cluster before they can try any edge capabilities. This raises the barrier to entry considerably and imposes many restrictions on use, making it hard for users to get started. In short, the common problems are:

- The threshold is too high
- Usage is too restrictive
- Adding edge nodes is troublesome
- Automation is poor

In response to these problems, and in order to lower the threshold for using an edge Kubernetes cluster and make it production-capable, we designed a one-click solution for deploying an edge Kubernetes cluster that completely shields the installation details, so users get a zero-threshold experience of edge capabilities:

- One-click
- Two kinds of installation/creation
- Online installation supported
- Can be used in production
- Zero learning cost

We studied the Kubeadm source code and found that we could borrow Kubeadm's workflow for creating native Kubernetes clusters and joining nodes to deploy edge Kubernetes clusters with one click, performing the installation step by step. This is exactly what we wanted: a simple, flexible, low-learning-cost deployment solution. So we stood on the shoulders of giants, reused Kubeadm's ideas and source code, and designed the following solution. The "Kubeadm init cluster / join node" part completely reuses the kubeadm source code, and all of its logic is exactly the same as Kubeadm's. This approach has the following advantages:

Fully compatible with Kubeadm. We simply stand on Kubeadm's shoulders: before kubeadm init/join we set the configuration parameters the edge cluster needs, initialize the Master or Node machine automatically, and install the container runtime. After kubeadm init/join completes, we install the CNI network plug-in and deploy the corresponding edge capability components. We pull in the Kubeadm source code via Go modules and did not modify a single line of it during the whole process, so it stays completely native and is ready to be upgraded to a newer Kubeadm in the future.

One-click, easy to use, flexible, and automated. edgeadm init cluster / join node completely retains kubeadm init/join's original parameters and flow, but automatically initializes the node and installs the container runtime while running. With the edgeadm --enable-edge=false parameter you install a native Kubernetes cluster in one step; with edgeadm --enable-edge=true you install an edge Kubernetes cluster in one step (see the short example below). You can join any node as long as it can reach the node where kube-apiserver runs, and you can also join additional masters; join master follows the Kubeadm approach as well.
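The toggle mentioned above looks like this in practice (a minimal sketch; the remaining flags are covered in the installation steps below):

```
# Edge Kubernetes cluster (the default):
./edgeadm init --enable-edge=true  <other kubeadm-style flags...>
# Plain, kubeadm-equivalent Kubernetes cluster:
./edgeadm init --enable-edge=false <other kubeadm-style flags...>
```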
To build a highly available control plane, you can use join master to add Master nodes whenever needed to achieve high availability.

No learning cost, exactly the same as kubeadm. Because the "init cluster / join node" part completely reuses the kubeadm source code, all logic is identical to Kubeadm's. edgeadm retains kubeadm's usage habits and all of its flag parameters, so there is no new learning cost; users can customize the edge Kubernetes cluster with kubeadm's parameters or a kubeadm.config file.

Edge node security enhancement. With the help of the Kubernetes Node authorization mechanism, we enable the NodeRestriction admission plugin to ensure that each node has a unique identity with only a minimal set of permissions; even if an edge node is compromised, other edge nodes cannot be manipulated. For the kubelet, we also enable the certificate rotation mechanism by default: when the kubelet certificate is about to expire, a new key is generated automatically and a new certificate is requested from the Kubernetes API. Once the new certificate is available, it is used to authenticate the connection with the Kubernetes API.

Prerequisites:

- Follow kubeadm's minimum requirements: masters and nodes need at least 2 CPUs and 2 GB of RAM, and no less than 1 GB of disk space.
- Warning: provide machines that are as clean as possible to avoid installation errors caused by other factors. If a container service already runs on the machine, it may be cleaned up during installation, so confirm carefully before executing.
- Currently the amd64 and arm64 architectures are supported. For other systems you can compile edgeadm and build a matching installation package yourself; see "5. Customize the Kubernetes static installation package."
- Supported Kubernetes versions: v1.18 or later. The provided installation package ships Kubernetes v1.18.2 only; for other Kubernetes versions, build the package yourself as described in section 5.

Choose the installation package that matches the CPU architecture of your installation node [amd64, arm64]:

```
arch=amd64 version=v0.3.0 && rm -rf edgeadm-linux-* && wget https://superedge-1253687700.cos.ap-guangzhou.myqcloud.com/$version/$arch/edgeadm-linux-$arch-$version.tgz && tar -xzvf edgeadm-linux-* && cd edgeadm-linux-$arch-$version && ./edgeadm
```

The installation package is about 200 MB. For detailed information about its contents, see "5. Customize the Kubernetes static installation package." If the download is slow, you can go directly to the corresponding SuperEdge release, download edgeadm-linux-amd64/arm64-*.tgz, and decompress it the same way (see the quick sanity check below).
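Before running the extracted binary, a quick sanity check of the download can save a confusing failure later (a sketch; it assumes the $arch and $version variables from the snippet above and the usual --help flag of a CLI binary):

```
tar -tzf edgeadm-linux-$arch-$version.tgz    # a corrupt or truncated download fails here
cd edgeadm-linux-$arch-$version && ./edgeadm --help
```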
One-click installation of the edge independent Kubernetes cluster function is supported starting from SuperEdge-v0.3.0-beta.0, pay attention to ``` ./edgeadm init --kubernetes-version=1.18.2 --image-repository superedge.tencentcloudcr.com/superedge --service-cidr=10.96.0.0/12 --pod-network-cidr=192.168.0.0/16 --install-pkg-path ./kube-linux-*.tar.gz --apiserver-cert-extra-sans=<Master public IP> --apiserver-advertise-address=<master Intranet IP> --enable-edge=true -v=6 ``` On enable-edge=true: Whether to deploy edge capability components, the default is true enable-edge=false means to install a native Kubernetes cluster, which is exactly the same as the cluster built by kubeadm; install-pkg-path: The address of the Kubernetes static installation package The value of install-pkg-path can be the path on the machine or the network address (for example: http://xxx/xxx/kube-linux-arm64/amd64-*.tar.gz, which can be encrypted without wget You can), pay attention to use the Kubernetes static installation package that matches the machine system; apiserver-cert-extra-sans: kube-apiserver certificate extension address image-repository: image repository address If superedge.tencentcloudcr.com/superedge is slower, you can switch to other accelerated mirror warehouses, as long as you can pull down kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, pause, etc. mirrors. Other parameters have the same meaning as Kubeadm and can be configured according to kubeadms requirements. You can also use kubeadm.config to configure the original parameters of kubeadm, and create an edge Kubernetes cluster through edgeadm init --config kubeadm.config --install-pkg-path ./kube-linux-*.tar.gz . If there is no problem during execution and the cluster is successfully initialized, the following content will be output: ``` Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the" }, { "data": "Run \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ Then you can join any number of worker nodes by running the following on each as root: edgeadm join xxx.xxx.xxx.xxx:xxx --token xxxx \\ --discovery-token-ca-cert-hash sha256:xxxxxxxxxx --install-pkg-path <Path of edgeadm kube-* install package> ``` If there is a problem during the execution, the corresponding error message will be returned directly and the initialization of the cluster will be interrupted. You can use the ./edgeadm reset command to roll back the initialization operation of the cluster. To enable non-root users to run kubectl, run the following commands, which are also part of the edgeadm init output: ``` mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` If you are the root user, you can run: ``` export KUBECONFIG=/etc/kubernetes/admin.conf ``` Note that the ./edgeadm join command that saves the output of ./edgeadm init will be used when adding node nodes later. The validity period of the token is the same as kubeadm 24h, after expiration, you can use ./edgeadm token create to create a new token. 
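For the kubeadm.config route mentioned above, here is a minimal sketch that mirrors the flags from the init example. The kubeadm v1beta2 config API matches Kubernetes v1.18, and the values are the ones used in this guide; adjust them to your environment:

```
cat > kubeadm.config <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: superedge.tencentcloudcr.com/superedge
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 192.168.0.0/16
EOF
./edgeadm init --config kubeadm.config --install-pkg-path ./kube-linux-*.tar.gz
```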
The value generation of discovery-token-ca-cert-hash is also the same as kubeadm, which can be generated by executing the following command on the master node. ``` openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' ``` Execute <2>. Download the edgeadm static installation package on the edge node, or upload the edgeadm static installation package to the edge node by other means, and then execute the following command: ``` ./edgeadm join <Master public/Intranet IP or domain>:Port --token xxxx \\ --discovery-token-ca-cert-hash sha256:xxxxxxxxxx --install-pkg-path <edgeadm Kube-*Static installation package address/FTP path> --enable-edge=true ``` On: <Master public/Intranet IP or domain>: the address where the node accesses the Kube-apiserver service. You can change the address of the Kube-apiserver service prompted by the edgeadm init to the node to be replaced by Master node public network IP/Master node internal network IP/domain name depending on the situation, depending on whether you want the node to access Kube through the external network or the internal network -apiserver service. enable-edge=true: Whether the added node is used as an edge node (whether to deploy edge capability components), the default is true enable-edge=false means join the native Kubernetes cluster node, which is exactly the same as the node joined by kubeadm; If there are no exceptions in the execution process, the new node successfully joins the cluster, and the following will be output: ``` This node has joined the cluster: Certificate signing request was sent to apiserver and a response was received. The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the control-plane to see this node join the cluster. ``` If there is a problem during the execution, the corresponding error message will be returned directly, and the addition of the node will be interrupted. You can use the ./edgeadm reset command to roll back the operation of joining the node and rejoin. 
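Putting the two commands above together, regenerating a join invocation after the 24-hour token expires might look like the sketch below. It assumes ./edgeadm token create prints only the bare token, as kubeadm's equivalent does, and uses the default API server port 6443:

```
# On a master node:
TOKEN=$(./edgeadm token create)
HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "./edgeadm join <Master IP>:6443 --token $TOKEN --discovery-token-ca-cert-hash sha256:$HASH --install-pkg-path ./kube-linux-*.tar.gz"
# Run the printed command on the node you want to join.
```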
Tip: when an edge node joins successfully, it is labeled with superedge.io/edge-node=enable, which makes it easy for subsequent applications to target edge nodes with a nodeSelector (see the scheduling sketch at the end of this section). Native Kubernetes nodes, as with kubeadm's join, get no such label.

Install Haproxy on the load-balancing machine as the main entrance of the cluster. Note: replace <master VIP> in the configuration file below with your own VIP.

```
cat << EOF > /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000
frontend main *:5000
    acl url_static path_beg -i /static /images /javascript /stylesheets
    acl url_static path_end -i .jpg .gif .png .css .js
    use_backend static if url_static
    default_backend app
frontend kubernetes-apiserver
    mode tcp
    bind *:16443
    option tcplog
    default_backend kubernetes-apiserver
backend kubernetes-apiserver
    mode tcp
    balance roundrobin
    server master-0 <master VIP>:6443 check  # Here replace the master VIP with the user's own VIP
backend static
    balance roundrobin
    server static 127.0.0.1:4331 check
backend app
    balance roundrobin
    server app1 127.0.0.1:5001 check
    server app2 127.0.0.1:5002 check
    server app3 127.0.0.1:5003 check
    server app4 127.0.0.1:5004 check
EOF
```

If the cluster has two masters, install Keepalived on both masters and perform the same operation. Note: in the keepalived.conf below, <master's local public network IP> and <the other master's public network IP> are swapped between the two masters' configurations; don't mix them up.

```
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    smtp_connect_timeout 30
    router_id LVS_DEVEL_EDGE_1
}
vrrp_script check_haproxy {
    script "/etc/keepalived/do_sth.sh"
    interval 5
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    nopreempt
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aaa
    }
    virtual_ipaddress {
        <master VIP>  # Here replace the master VIP with the user's own VIP
    }
    unicast_src_ip <master public IP>
    unicast_peer {
        <public IP of the other master node>
    }
    notify_master "/etc/keepalived/notify_action.sh MASTER"
    notify_backup "/etc/keepalived/notify_action.sh BACKUP"
    notify_fault "/etc/keepalived/notify_action.sh FAULT"
    notify_stop "/etc/keepalived/notify_action.sh STOP"
    garp_master_delay 1
    garp_master_refresh 5
    track_interface {
        eth0
    }
    track_script {
        check_haproxy
    }
}
EOF
```

Perform the cluster initialization on one of the masters:

```
./edgeadm init --control-plane-endpoint <Master VIP> --upload-certs --kubernetes-version=1.18.2 --image-repository superedge.tencentcloudcr.com/superedge --service-cidr=10.96.0.0/12 --pod-network-cidr=192.168.0.0/16 --apiserver-cert-extra-sans=<Domain or Public/Intranet IP of Master node> --install-pkg-path <edgeadm kube-* static installation package address/FTP path> -v=6
```

The meaning of the parameters is the same as in "3. Use edgeadm to install an edge Kubernetes cluster"; everything else matches kubeadm, so it is not repeated here. If there are no exceptions during execution and the cluster initializes successfully, the following content will be output:

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by running the following command on each as root:

  edgeadm join xxx.xxx.xxx.xxx:xxx --token xxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxx \
    --control-plane --certificate-key xxxxxxxxxx \
    --install-pkg-path <Path of edgeadm kube-* install package>

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; if necessary, you can use
"edgeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  edgeadm join xxx.xxx.xxx.xxx:xxx --token xxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxx \
    --install-pkg-path <Path of edgeadm kube-* install package>
```

If there is a problem during execution, the corresponding error message is returned directly and the initialization of the cluster is interrupted; use the ./edgeadm reset command to roll back the initialization operation.

To enable non-root users to run kubectl, run the following commands, which are also part of the edgeadm init output:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

If you are the root user, you can run:

```
export KUBECONFIG=/etc/kubernetes/admin.conf
```

Save the ./edgeadm join commands printed by ./edgeadm init; you will need them later to add Master nodes and edge nodes.

Execute the ./edgeadm join command on another master:

```
./edgeadm join xxx.xxx.xxx.xxx:xxx --token xxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxx \
    --control-plane --certificate-key xxxxxxxxxx \
    --install-pkg-path <Path of edgeadm kube-* install package>
```

If there are no exceptions in the execution process, the new master successfully joins the cluster, and the following content will be output:

```
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
```

If there is a problem during the execution, the corresponding error message is returned directly and the addition of the node is interrupted; use the ./edgeadm reset command to roll back and rejoin.

Join a worker or edge node with the worker form of the command:

```
./edgeadm join xxx.xxx.xxx.xxxx:xxxx --token xxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxx \
    --install-pkg-path <Path of edgeadm kube-* install package>
```

If there are no exceptions in the execution process, the new node successfully joins the cluster, and the following content will be output:

```
This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

If there are exceptions during execution, the corresponding error message is returned directly and the join is interrupted; use the ./edgeadm reset command to roll back and try again.

The static installation package is laid out as follows:

```
kube-linux-arm64-v1.18.2.tar.gz          ## Kubernetes static installation package for kube-v1.18.2
├── bin                                  ## Binary directory
│   ├── conntrack                        ## Binary file for connection tracking
│   ├── kubectl                          ## kubectl for kube-v1.18.2
│   ├── kubelet                          ## kubelet for kube-v1.18.2
│   └── lite-apiserver                   ## The corresponding version of lite-apiserver
├── cni                                  ## CNI configuration
│   └── cni-plugins-linux-v0.8.3.tar.gz  ## CNI plug-in binary package, v0.8.3
└── container                            ## Container runtime directory
    └── docker-19.03-linux-arm64.tar.gz  ## Docker 19.03 arm64 install script and packages
```

To customize another Kubernetes version, two things need to be done; to customize the static installation package for another system as well, three things are needed. See the SuperEdge documentation on customizing the Kubernetes static installation package for the detailed steps.
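As a follow-up to the edge-node label tip earlier in this section, pinning a workload to edge nodes is a one-line nodeSelector away (a sketch; the deployment name and image are placeholders, and the label is the one edgeadm applies):

```
kubectl create deployment edge-nginx --image=nginx
kubectl patch deployment edge-nginx -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"superedge.io/edge-node":"enable"}}}}}'
```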
Updatecli is a command-line tool used to define and apply update strategies: it reads a manifest, then works in three stages, source, condition, and target (see the sketch at the end of this section).

Deciding how, when, and where to update information is hard. Nowadays there are countless tools for continuous delivery or continuous deployment, and to configure our infrastructure we write Ansible playbooks, Puppet manifests, Helm charts, and so on. We rely heavily on configuration files to specify the versions we need to install. Unfortunately, those files are too often updated by hand, because it is hard to detect automatically what information must be updated and when: the logic that manipulates information in a configuration file is defined outside that file, the information comes from different sources (Maven, Docker, plain files, git repositories, etc.), and before modifying information we usually want to validate some assumptions.

Updatecli allows combining building blocks, aka plugins, to specify what information needs to be updated, when, and where, so we can easily implement the workflow that suits our needs.

- Quick Start: a one-page summary of how to use Updatecli
- Core: understand how Updatecli's core concepts work
- Plugins: understand how to combine the different plugins to define an update pipeline that suits your needs
- CI: understand how to use Updatecli from your CI environment to apply updates
- Contributing: find out how to contribute to Updatecli
- Help: get some help
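To make the three stages concrete, here is a hedged sketch of a manifest: the pipeline reads the latest nginx image tag (source), checks that the target file exists (condition), then writes the tag into a YAML key (target). The kind names are real Updatecli plugins, but spec fields and key syntax vary between versions, so treat this as illustrative rather than copy-paste-ready:

```
cat > updatecli.yaml <<'EOF'
sources:
  latestNginx:
    kind: dockerimage
    spec:
      image: nginx
conditions:
  fileExists:
    kind: file
    spec:
      file: values.yaml
targets:
  bumpImageTag:
    kind: yaml
    spec:
      file: values.yaml
      key: image.tag
EOF
updatecli diff  --config updatecli.yaml   # dry run: show what would change
updatecli apply --config updatecli.yaml   # apply the update pipeline
```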
Amazon Elastic Container Registry (Amazon ECR) is an AWS managed container image registry service that is secure, scalable, and reliable. Amazon ECR supports private repositories with resource-based permissions using AWS IAM, so that specified users or Amazon EC2 instances can access your container repositories and images. You can use your preferred CLI to push, pull, and manage Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts. Amazon ECR supports public container image repositories as well. For more information, see What is Amazon ECR Public in the Amazon ECR Public User Guide.

The AWS container services team maintains a public roadmap on GitHub. It contains information about what the teams are working on and allows all AWS customers the ability to give direct feedback. For more information, see AWS Containers Roadmap.

Amazon ECR contains the following components:

An Amazon ECR private registry is provided to each AWS account; you can create one or more repositories in your registry and store Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts in them. For more information, see Amazon ECR private registry.

Your client must authenticate to an Amazon ECR private registry as an AWS user before it can push and pull images. For more information, see Private registry authentication in Amazon ECR.

An Amazon ECR repository contains your Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts. For more information, see Amazon ECR private repositories. You can control access to your repositories and the contents within them with repository policies. For more information, see Private repository policies in Amazon ECR.

You can push and pull container images to your repositories. You can use these images locally on your development system, or you can use them in Amazon ECS task definitions and Amazon EKS pod specifications. For more information, see Using Amazon ECR images with Amazon ECS and Using Amazon ECR Images with Amazon EKS.

Amazon ECR provides the following features:

Lifecycle policies help with managing the lifecycle of the images in your repositories. You define rules that result in the cleaning up of unused images, and you can test rules before applying them to your repository. For more information, see Automate the cleanup of images by using lifecycle policies in Amazon ECR.

Image scanning helps in identifying software vulnerabilities in your container images. Each repository can be configured to scan on push.
If you are using Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), note that the setup for those two services is similar to the setup for Amazon ECR because Amazon ECR is an extension of both services. When using the AWS Command Line Interface with Amazon ECR, use a version of the AWS CLI that supports the latest Amazon ECR features. If you don't see support for an Amazon ECR feature in the AWS CLI, upgrade to the latest version of the AWS CLI. For information about installing the latest version of the AWS CLI, see Install or update to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. To learn how to push a container image to a private Amazon ECR repository using the AWS CLI and Docker, see Moving an image through its lifecycle in Amazon ECR. With Amazon ECR, you only pay for the amount of data you store in your repositories and for the data transfer from your image pushes and pulls. For more information, see Amazon ECR pricing. Javascript is disabled or is unavailable in your browser. To use the Amazon Web Services Documentation, Javascript must be enabled. Please refer to your browser's Help pages for instructions. Thanks for letting us know we're doing a good job! If you've got a moment, please tell us what we did right so we can do more of it. Thanks for letting us know this page needs work. We're sorry we let you down. If you've got a moment, please tell us how we can make the documentation better." } ]
Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms.

| Section | What can you find there? |
|:-|:-|
| A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. |
| B. Account Terms | These are the basic requirements of having an Account on GitHub. |
| C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. |
| D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. |
| E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. |
| F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. |
| G. Intellectual Property Notice | This describes GitHub's rights in the website and service. |
| H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. |
| I. Additional Product Terms | We have a few specific rules for GitHub's features and products. |
| J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. |
| K. Payment | You are responsible for payment. We are responsible for billing you accurately. |
| L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. |
| M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. |
| N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. |
| O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. |
| P. Release and Indemnification | You are fully responsible for your use of the service. |
| Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. |
| R. Miscellaneous | Please see this section for legal details including our choice of law. |

Effective date: November 16, 2020

Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful information.

Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure.

Users.
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) is known to you before we disclose it to you; (c) is independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) is disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, \"Feedback\"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan: For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage: Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details. Invoicing: For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service \"as is\", and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service \"as is\" and \"as available\", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service or otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "expense. Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q (Changes to These Terms). These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement, oral or written, and any other communications between you and GitHub relating to the subject matter of these terms, including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "Distribution", "subcategory": "Container Registry" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "#milestones.md", "project_name": "Dragonfly", "subcategory": "Container Registry" }
[ { "data": "Dragonfly is an open-source p2p-based image and file Distribution System. It is designed to improve the efficiency and speed of large-scale file distribution. It is widely used in the fields of application distribution, cache distribution, log distribution and image distribution. At this stage, Dragonfly has evolved based on Dragonfly1.x. On the basis of maintaining the original core capabilities of Dragonfly1.x, Dragonfly It has been comprehensively upgraded in major features such as system architecture design, product capabilities, and usage scenarios. Dragonfly provides a one-stop solution for large-scale file distribution. The basic capabilities provided by Dragonfly include: Dragonfly has been selected and put into production use by many Internet companies since its open source in 2017, and entered CNCF in October 2018, becoming the third project in China to enter the CNCF Sandbox. In April 2020, CNCF TOC voted to accept Dragonfly as an CNCF Incubating project. Dragonfly has developed the next version through production practice, which has absorbed the advantages of Dragonfly1.x and made a lot of optimizations for known problems. Dragonfly has unparalleled advantages in large-scale file distribution. Dragonfly introduces many new features: New Architecture Dragonfly is composed of four parts: Manager, Scheduler, Seed Peer and Peer. Dfdaemon can be used as seed peer and peer. The independence of Scheduler and The decoupling of scheduler and seed peer eliminates the mutual influence between scheduling and storage IO. At the same time, it supports seed peer plugin and can be deployed on demand. In addition, the whole system is based on the GRPC framework which greatly improves the distribution efficiency of P2P. More Application Scenarios Dragonfly supports different types of storage sources, such as HDFS, OSS, NAS, etc. Product Capability Dragonfly supports configuration management, data visualization, etc. through the management and control system, making the system easier to use. Dragonfly includes four parts Manager, Scheduler, Seed Peer and Peer, refer to Architecture." } ]
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Distribution", "subcategory": "Container Registry" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "operator. For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific language, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` Similarly, to search for JavaScript files within a src directory, you can use: ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? glob character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` By contrast, glob expressions are disabled for quoted strings, so the following query will only match paths containing the literal string file?. ``` path:\"file?\" ```
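Qualifiers compose with the boolean operators described above. As an illustrative combination (this specific query is hypothetical, built only from qualifiers documented here), the following would find Ruby files sitting directly inside a lib directory of the linguist repository:

```
repo:github-linguist/linguist language:ruby path:lib/*.rb
```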
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "name). For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for a number of languages, and we are working on adding support for more. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the values archived and fork. For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` 
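These qualifiers combine with the operators covered earlier. As a hedged illustration (the symbol pattern is made up for this sketch, not taken from the article), the following query would find Go definitions whose names start with New, while skipping forks:

```
language:go symbol:/^New.*/ NOT is:fork
```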
For example, given the following query: ``` printf(\"hello world\\n\"); ``` code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
{ "category": "Provisioning", "file_name": "docs.github.com.md", "project_name": "Distribution", "subcategory": "Container Registry" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Dragonfly", "subcategory": "Container Registry" }
[ { "data": "Dragonfly is a file-distribution and image-acceleration system based on P2P technology. It is designed to increase the efficiency of large-scale data distribution and to improve the use of peers' idle bandwidth. It is widely used in domains such as image acceleration, file distribution, AI model distribution, and AI dataset distribution. Dragonfly offers a range of features for these use cases. Dragonfly 1.x was open sourced in November 2017 and has been used in production environments by many companies. It joined the CNCF as a sandbox project in October 2018. In April 2020, the CNCF Technical Oversight Committee (TOC) voted to accept Dragonfly as an Incubating Project. In April 2021, Dragonfly 2.0 was released after architectural optimization and code refactoring. Dragonfly services can be divided into four categories: Manager, Scheduler, Seed Peer, and Peer. (The Dragonfly architecture diagram appears here in the original page; you can find more detailed architecture docs in Architecture.) When downloading an image or file, the download request is proxied to Dragonfly via the Peer HTTP Proxy. The Peer first registers the Task with the Scheduler, and the Scheduler checks the Task metadata to determine whether the Task is being downloaded for the first time in the P2P cluster. If it is, the Seed Peer is triggered to download back-to-source, and the Task is divided at the piece level. After successful registration, the Peer establishes a connection to the Scheduler for this Task, and the Scheduler schedules the Seed Peer to stream pieces to the Peer. When a piece is successfully downloaded, the piece metadata is reported to the Scheduler to inform the next scheduling decision. If this is not the first download, the Scheduler schedules other Peers for the download instead. The Peer downloads pieces from different Peers, splices them together, and returns the entire file, completing the P2P download." } ]
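To make the scheduling flow above concrete, here is a minimal Go sketch of the decision it describes: first-time tasks go to the Seed Peer for back-to-source download, later ones are served piece by piece from other Peers, and each completed piece is reported back to the Scheduler. All type and method names here are hypothetical illustrations, not Dragonfly's actual APIs.

```
package main

import "fmt"

// Task is a hypothetical stand-in for a Dragonfly download task.
type Task struct {
	ID         string
	FirstSeen  bool // true if no peer in the cluster holds this task yet
	PieceCount int
}

type Scheduler struct{}

// Register mirrors the step where a peer registers a task and the
// scheduler decides who should serve the pieces.
func (s *Scheduler) Register(t Task) []string {
	if t.FirstSeen {
		// First download in the cluster: the seed peer fetches the
		// file back-to-source and splits it into pieces.
		return []string{"seed-peer"}
	}
	// Otherwise schedule other peers that already hold pieces.
	return []string{"peer-a", "peer-b"}
}

func main() {
	s := &Scheduler{}
	task := Task{ID: "sha256:...", FirstSeen: true, PieceCount: 4}
	sources := s.Register(task)
	for piece := 0; piece < task.PieceCount; piece++ {
		src := sources[piece%len(sources)]
		// Download the piece, then report its metadata back to the
		// scheduler so it can inform the next scheduling decision.
		fmt.Printf("piece %d <- %s (reporting metadata to scheduler)\n", piece, src)
	}
}
```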
{ "category": "Provisioning", "file_name": "2.0.0.md", "project_name": "Harbor", "subcategory": "Container Registry" }
[ { "data": "Welcome to the Harbor 2.0.x documentation. This documentation includes all of the information that you need to install, configure, and use Harbor. This section describes how to install Harbor and perform the required initial configuration. These day 1 operations are performed by the Harbor Administrator. This section describes how to use and maintain your Harbor registry instance after deployment. These day 2 operations are performed by the Harbor Administrator. This section describes how users with the developer, master, and project administrator roles manage users, and create, configure, and participate in Harbor projects. This section describes how developers can build from Harbor source code, customize their deployments, and contribute to the open-source Harbor project. The source files for this documentation set are located in the Harbor repository on Github. For versions of the docs before 2.0.x, go to the docs folder in the Github repository and select the appropriate release-1.xx.x branch." } ]
{ "category": "Provisioning", "file_name": "1.10.md", "project_name": "Harbor", "subcategory": "Container Registry" }
[ { "data": "This section describes how to perform a new installation of Harbor. If you are upgrading from a previous version of Harbor, you might need to update the configuration file and migrate your data to fit the database schema of the later version. For information about upgrading, see Upgrading Harbor. Before you install Harbor, you can test the latest version of Harbor on a demo environment maintained by the Harbor team. For information, see Test Harbor with the Demo Server. Harbor supports integration with different 3rd-party replication adapters for replicating data, OIDC adapters for authN/authZ, and scanner adapters for vulnerability scanning of container images. For information about the supported adapters, see the Harbor Compatibility List. The standard Harbor installation process involves a few stages; if installation fails, see Troubleshooting Harbor Installation. You can also use Helm to install Harbor on a Kubernetes cluster, to make Harbor highly available. For information about installing Harbor with Helm on a Kubernetes cluster, see Deploying Harbor with High Availability via Helm. For information about how to manage your deployed Harbor instance, see Reconfigure Harbor and Manage the Harbor Lifecycle. By default, Harbor uses its own private key and certificate to authenticate with Docker. For information about how to optionally customize your configuration to use your own key and certificate, see Customize the Harbor Token Service. After installation, log in to Harbor via the web console to configure the instance. Harbor also provides a command line interface (CLI) that allows you to Configure Harbor System Settings at the Command Line. The table below lists some of the key components that are deployed when you deploy Harbor.

| Component | Version |
|:--|:-|
| Postgresql | 14.10 |
| Redis | 7.2.2 |
| Beego | 2.0.6 |
| Distribution/Distribution | 2.8.3 |
| Helm | 2.9.1 |
| Swagger-ui | 5.9.1 |
" } ]
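After installation, a quick way to confirm that the deployed components are up is to poll Harbor's health endpoint. The Go sketch below assumes the /api/v2.0/health endpoint and a harbor.example.com hostname; both are assumptions to adjust for your own instance.

```
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// healthStatus mirrors the overall status field assumed to be returned
// by Harbor's health endpoint.
type healthStatus struct {
	Status string `json:"status"`
}

func main() {
	// Replace with your own Harbor hostname.
	resp, err := http.Get("https://harbor.example.com/api/v2.0/health")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var h healthStatus
	if err := json.NewDecoder(resp.Body).Decode(&h); err != nil {
		panic(err)
	}
	fmt.Println("harbor reports:", h.Status) // expect "healthy" when all components are up
}
```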
{ "category": "Provisioning", "file_name": "2.10.0.md", "project_name": "Harbor", "subcategory": "Container Registry" }
[ { "data": "Welcome to the Harbor 2.10.x documentation. This documentation includes all of the information that you need to install, configure, and use Harbor. This section describes how to install Harbor and perform the required initial configuration. These day 1 operations are performed by the Harbor Administrator. This section describes how to use and maintain your Harbor registry instance after deployment. These day 2 operations are performed by the Harbor Administrator. This section describes how users with the developer, maintainer, and project administrator roles manage users, and create, configure, and participate in Harbor projects. This section describes how developers can build from Harbor source code, customize their deployments, and contribute to the open-source Harbor project. The source files for this documentation set are located in the Harbor repository on Github. For the previous versions of the docs, go to the docs folder in the Github repository and select the appropriate release-X.Y.Z branch." } ]
{ "category": "Provisioning", "file_name": "2.1.0.md", "project_name": "Harbor", "subcategory": "Container Registry" }
[ { "data": "Welcome to the Harbor 2.1.x documentation. This documentation includes all of the information that you need to install, configure, and use Harbor. This section describes how to install Harbor and perform the required initial configuration. These day 1 operations are performed by the Harbor Administrator. This section describes how to use and maintain your Harbor registry instance after deployment. These day 2 operations are performed by the Harbor Administrator. This section describes how users with the developer, maintainer, and project administrator roles manage users, and create, configure, and participate in Harbor projects. This section describes how developers can build from Harbor source code, customize their deployments, and contribute to the open-source Harbor project. The source files for this documentation set are located in the Harbor repository on Github. For the previous versions of the docs, go to the docs folder in the Github repository and select the appropriate release-xxx branch." } ]
{ "category": "Provisioning", "file_name": "2.6.0.md", "project_name": "Harbor", "subcategory": "Container Registry" }
[ { "data": "Welcome to the Harbor 2.6.x documentation. This documentation includes all of the information that you need to install, configure, and use Harbor. This section describes how to install Harbor and perform the required initial configuration. These day 1 operations are performed by the Harbor Administrator. This section describes how to use and maintain your Harbor registry instance after deployment. These day 2 operations are performed by the Harbor Administrator. This section describes how users with the developer, maintainer, and project administrator roles manage users, and create, configure, and participate in Harbor projects. This section describes how developers can build from Harbor source code, customize their deployments, and contribute to the open-source Harbor project. The source files for this documentation set are located in the Harbor repository on Github. For the previous versions of the docs, go to the docs folder in the Github repository and select the appropriate release-X.Y.Z branch." } ]
{ "category": "Provisioning", "file_name": "administration.md", "project_name": "Harbor", "subcategory": "Container Registry" }
[ { "data": "This section describes how to configure and maintain Harbor after deployment. These operations are performed by the Harbor system administrator. The Harbor system administrator performs global configuration operations that apply to the whole Harbor instance; the pages in this section describe each of these operations." } ]
{ "category": "Provisioning", "file_name": "2.5.0.md", "project_name": "Harbor", "subcategory": "Container Registry" }
[ { "data": "Welcome to the Harbor 2.5.x documentation. This documentation includes all of the information that you need to install, configure, and use Harbor. This section describes how to install Harbor and perform the required initial configuration. These day 1 operations are performed by the Harbor Administrator. This section describes how to use and maintain your Harbor registry instance after deployment. These day 2 operations are performed by the Harbor Administrator. This section describes how users with the developer, maintainer, and project administrator roles manage users, and create, configure, and participate in Harbor projects. This section describes how developers can build from Harbor source code, customize their deployments, and contribute to the open-source Harbor project. The source files for this documentation set are located in the Harbor repository on Github. For the previous versions of the docs, go to the docs folder in the Github repository and select the appropriate release-X.Y.Z branch." } ]
{ "category": "Provisioning", "file_name": "FAQ.html.md", "project_name": "Portus", "subcategory": "Container Registry" }
[ { "data": "Portus is an open source authorization service and user interface for your on-premise Docker registry. Portus adds a fine-grained set of permissions on top of your registry in order to make access more secure and controlled. Moreover, Portus offers a web UI on top of your registry that gives you a clear overview of the images and tags that are stored in your registry. Portus offers tons of important features... and so much more. Take a look at this page and have fun with all the possibilities! When we started this project, we already had some images on the Docker Hub, and we enjoyed using it. That being said, soon enough we realized the problems that Docker Hub entails. Fortunately for us, Docker has a project called Distribution that addressed one of our biggest concerns: being able to deploy an on-premise Docker registry that takes care of storing and distributing your private Docker images. Docker Distribution was designed with the UNIX principle in mind of \"do one thing and do it well\". For this reason, Distribution only takes care of storing and distributing your images, and offers an API so services can be built on top of it. There are two main aspects of said API. With this in mind, we started Portus to address all of our concerns with regard to distributing images inside of an organization, while providing a clear user interface. Moreover, we released Portus as free software. We did that for several reasons. One of the most common problems when deploying Portus is failing at configuring SSL. We are sure that, at this point, if you are having problems with SSL it is not because of a bug in either Portus or the Docker registry, but rather a problem in your configuration. We are not going to lie: the deployment of Portus can be a daunting task considering the amount of moving pieces and the complexity of some of these pieces. For this reason, it's easy to have errors on your deployment due to some missing step or some misunderstanding. Because of this, we have had quite a few questions in this regard, and some invalid bug reports. If you are facing any problem in Portus, take a systematic look at your deployment before filing a bug report." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "JFrog Artifactory", "subcategory": "Container Registry" }
[ { "data": "SCA: Software Composition Analysis for source code and binary files. Contextual Analysis: deep contextual analysis combining real-world exploitability and CVE applicability. Secrets: secrets detection for source code and binary files. Infrastructure as Code (IaC): identify security exposures in your IaC. SAST: discover vulnerabilities in first-party code." } ]
{ "category": "Provisioning", "file_name": "Configuring-Portus.html.md", "project_name": "Portus", "subcategory": "Container Registry" }
[ { "data": "The Docker Registry is a service that can talk to the docker daemon in order to upload and download docker images. Since version 2, the docker registry is called Distribution, and you can find the documentation here. There are multiple ways to deploy your own private registry. This page explains how to do this. From a deployment point of view, the only thing important for Portus is that it should be reachable. Note that this will be checked when adding your registry into the Portus database, as explained here. Once you have your registry in place, you need to configure it. This can be done either through the /etc/registry/config.yml file or through environment variables. For convenience, we will assume that you have access to the config.yml file. This is a config example:

```
version: 0.1
loglevel: debug
storage:
  filesystem:
    rootdirectory: /var/lib/docker-registry
  delete:
    enabled: true
http:
  addr: :5000
  tls:
    certificate: /etc/nginx/ssl/my.registry.crt
    key: /etc/nginx/ssl/my.registry.key
auth:
  token:
    realm: https://my.portus/v2/token
    service: my.registry:5000
    issuer: my.portus
    rootcertbundle: /etc/nginx/ssl/my.registry.crt
notifications:
  endpoints:
    - name: portus
      url: https://my.portus/v2/webhooks/events
      timeout: 500ms
      threshold: 5
      backoff: 1s
```

Note in particular the auth section, which points the registry at Portus' token endpoint, and the notifications section, which makes the registry report events to Portus' webhook. From now on, we will suppose that a registry has already been created. The next generation of Docker registries (those based on v2.0 or higher) push the authorization of requests to an external authorization service. In our case, this external authorization service will be Portus. If you would like to have a clearer picture about this, take a look at this explanation from Docker's documentation. When Portus receives an authorization request, it gets information such as the requesting user and the action to be performed. If Portus decides that the user is authorized to perform the action, then it sends a JWT token suitable for the docker registry being targeted. You can read all the details about the format of this JWT token here. As stated in the previous section, the JWT token is used to handle the authentication between Portus and your private registry. There are some considerations regarding this token. First of all, it needs the machine_fqdn secret to be set. You can find this in the config/secrets.yml file. If you change this file you should restart Portus afterwards. Note that in production you can just provide the PORTUS_MACHINE_FQDN environment variable. Another thing to consider is the expiration time of the token itself. By default it expires in 5 minutes. However, it's possible that the image to be uploaded is too big or the communication is too slow, and the upload can take more than 5 minutes. If this happens, then the upload will be cancelled from the registry's side, and it will fail. This is a known issue, and from Portus' side we provide this workaround. As explained in this page, Portus is able to synchronize the contents of the registry and its database. In this regard, there are some considerations to be made. First of all, note that no synchronization will be made until the admin sets up the registry in the Portus database. This is better explained in this page. Moreover, in order for this to happen, Portus needs the portus user to exist. This is done automatically in containerized deployments. 
That being said, if this is not your case, you have to create it after migrating the database by performing: ``` $ rake db:seed ``` or ``` $ rake portus:create_api_account ``` Note that neither of these commands will work if you have not set the portus_password secret value in the config/secrets.yml file. This value can be set in production with the environment variable PORTUS_PASSWORD." } ]
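The token flow that this page describes is the standard Docker registry token-authentication handshake. The Go sketch below shows roughly what a client does once the registry points it at the realm configured above (Portus' /v2/token endpoint); the hostnames, credentials, and repository name are placeholders modeled on the example configuration, and error handling is kept minimal.

```
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// tokenResponse holds the relevant part of the reply from the realm
// (Portus, in this setup).
type tokenResponse struct {
	Token string `json:"token"`
}

func main() {
	// Values taken from the registry's auth.token section above.
	realm := "https://my.portus/v2/token"
	service := "my.registry:5000"
	scope := "repository:myorg/myimage:pull" // the action we want to perform

	// 1. Ask the authorization service (Portus) for a token. Portus
	// decides, based on its own permission model, whether to grant it.
	q := url.Values{"service": {service}, "scope": {scope}}
	req, _ := http.NewRequest("GET", realm+"?"+q.Encode(), nil)
	req.SetBasicAuth("someuser", "somepassword") // placeholder Portus credentials

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tr tokenResponse
	if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil {
		panic(err)
	}

	// 2. Replay the registry request with the Bearer token; the
	// registry verifies it against the configured rootcertbundle.
	req2, _ := http.NewRequest("GET", "https://my.registry:5000/v2/myorg/myimage/tags/list", nil)
	req2.Header.Set("Authorization", "Bearer "+tr.Token)
	resp2, err := http.DefaultClient.Do(req2)
	if err != nil {
		panic(err)
	}
	fmt.Println("registry answered:", resp2.Status)
}
```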
{ "category": "Provisioning", "file_name": "release-schedule.html.md", "project_name": "Portus", "subcategory": "Container Registry" }
[ { "data": "We plan to have an even shorter development cycle for the 2.5 release. The 2.4 release didn't take as long as other releases, but we still think that we can do faster and smaller iterations. Our plan for the 2.5 release can be seen here." } ]
{ "category": "Provisioning", "file_name": "migrate-from-rpm.html.md", "project_name": "Portus", "subcategory": "Container Registry" }
[ { "data": "During the development cycle of the 2.3 release, we started to focus more and more on containerized deployments. The official openSUSE Docker image got more attention and it started to be thinner and easier to deploy. Following current trends, we decided to make these kinds of deployments the preferred ones in the 2.3 release. The migration path from a pure RPM installation to a containerized one is not that big. That's because the Docker image simply installs the RPM as produced in our OBS project. So from the distribution point of view (and the tooling) nothing changes: the only change is to go from a bare metal installation to Docker containers. Prior to anything, we should stop Portus, which means stopping both the Portus Web UI and the Portus crono service. On version 2.2 and older versions, the Web UI is configured as a virtual host in apache2. Thus, in order to stop it, you need to disable that configuration. You can do that by running: ``` sudo mv /etc/apache2/vhosts.d/portus.conf /etc/apache2/vhosts.d/portus.conf.disabled ``` Having disabled the vhost configuration, you can stop the crono service by running: ``` sudo systemctl stop portus-crono ``` Once you have Portus stopped, you can proceed to back up the data. After stopping Portus, you should proceed as you would with any upgrade: back up your data. There are two main things you should back up: images stored in the registry and the database. The registry can store Docker images in remote locations with support for Amazon S3, Microsoft Azure, etc. That being said, you can also store these images locally with the following configuration:

```
storage:
  filesystem:
    rootdirectory: /my/location
```

This configuration is stored in /etc/registry/config.yml, which was auto-generated if you used the portusctl setup command for setting up your RPM installation. So, now you have to back up the /my/location directory as pointed out in the above example. Finally, you should back up the data stored on the MySQL/MariaDB instance. At this point you can deploy Portus with Docker images. We maintain some examples that use docker-compose here that might serve as inspiration. These examples are a convenient way of running a similar plain docker command like: ``` $ docker run -d -v <path-to-certs>:/certificates:ro -p 3000:3000 <list-of-env-variables> opensuse/portus:2.3 ``` Moreover, if you are using Kubernetes, you might also be interested in the Helm Chart developed here. Regardless of your deployment method, make sure to read some tips that we have written here. This will help you when configuring your deployment. Once you have the new Portus container running, it is time to clean up by removing the old Portus RPM. You can do so by running: ``` zypper rm --clean-deps portus ``` The --clean-deps option will remove dependencies that are not needed by any other package. This could be the case for rubygem-passenger-apache2. If you are unsure about this, run the previous command without the --clean-deps option." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "ORY Hydra", "subcategory": "Key Management" }
[ { "data": "You can stream events (sign-ups, logins, machine-to-machine tokens issued, and many more) in real time, live as they happen in your Ory Network project, to your own infrastructure. Pipe those events into your own data warehouse, data lake, or flavor of choice, and use them to power your own analytics, dashboards, data science, and more. Live event streams are available for Ory Network enterprise contracts. Talk to your account manager or reach out directly to find out more. Your workload is not running on AWS or you don't want to use SNS? Reach out to discuss your requirements! Configuring AWS SNS as an event stream destination is easy and requires no exchange of confidential information. First, create an SNS topic and record its ARN, for example:

```
arn:aws:sns:us-east-1:123456789012:my-topic
```

Then attach a policy to the topic that allows publishing to it:

```
{
  \"Version\": \"2012-10-17\",
  \"Statement\": [
    {
      \"Sid\": \"OryNetworkEventStreamPublish\",
      \"Effect\": \"Allow\",
      \"Action\": [\"sns:Publish\"],
      \"Resource\": [\"<YOUR TOPIC ARN>\"]
    }
  ]
}
```

Next, create an IAM role that Ory Network can assume, and record its ARN, for example:

```
arn:aws:iam::123456789012:role/ory-network-event-streamer
```

Attach the following trust policy to that IAM role:

```
{
  \"Version\": \"2012-10-17\",
  \"Statement\": [
    {
      \"Effect\": \"Allow\",
      \"Principal\": { \"AWS\": \"601538168777\" },
      \"Action\": \"sts:AssumeRole\",
      \"Condition\": {
        \"StringEquals\": {
          \"sts:ExternalId\": \"<YOUR PROJECT UUID>\"
        }
      }
    }
  ]
}
```

This allows Ory Network to assume the role in your AWS account, and publish to your SNS topic. Finally, create the event stream:

```
ory create event-stream --project \"$YOUR_PROJECT_ID\" \\
  --type sns \\
  --aws-sns-topic-arn \"$YOUR_TOPIC_ARN\" \\
  --aws-iam-role-arn \"$YOUR_IAM_ROLE_ARN\"
```

For development purposes, you can subscribe an email address to your topic, and receive events via email. For production use, subscribe AWS SQS, AWS Kinesis Data Firehose, or any other AWS service that can consume events from an SNS topic. Check the AWS documentation for ideas. If your event stream destination is unavailable or misconfigured, Ory Network will retry sending the event multiple times with an exponential backoff between attempts." } ]
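The exact retry parameters Ory Network uses aren't documented above, so the Go sketch below only illustrates the general pattern of retrying delivery with an exponentially growing delay, using assumed values for the attempt count and base delay.

```
package main

import (
	"errors"
	"fmt"
	"time"
)

// deliver simulates publishing one event to the stream destination.
func deliver(event string) error {
	return errors.New("SNS topic unavailable") // always fails, for demonstration
}

// deliverWithRetry retries with exponential backoff. The attempt count
// and base delay are illustrative assumptions, not Ory's actual values.
func deliverWithRetry(event string, attempts int, base time.Duration) error {
	delay := base
	for i := 1; i <= attempts; i++ {
		if err := deliver(event); err == nil {
			return nil
		} else {
			fmt.Printf("attempt %d failed (%v), retrying in %s\n", i, err, delay)
		}
		time.Sleep(delay)
		delay *= 2 // exponential growth between attempts
	}
	return fmt.Errorf("giving up on event %q after %d attempts", event, attempts)
}

func main() {
	_ = deliverWithRetry("session.created", 5, 500*time.Millisecond)
}
```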
{ "category": "Provisioning", "file_name": "portusctl.html.md", "project_name": "Portus", "subcategory": "Container Registry" }
[ { "data": "Every production-ready deployment of Portus must have a process/container called background which performs some needed tasks for the normal operation of Portus. If you are running bare metal, you can simply run this process from the source code: ``` $ bundle exec rails r bin/background.rb ``` That being said, unless you are in a development environment, you won't have to perform that command. Instead, if you are using the official Docker image, you will have to set the following environment variable: PORTUS_BACKGROUND=true. This has already been set in the examples that we provide. As documented above, the background process consists of some tasks that have to be performed in order to have Portus running properly. These tasks are described in the sections below. As explained in this section, Portus keeps track of the events sent by the Registry itself. This way, Portus keeps track in real time of images/tags that have been pushed/deleted. Before this implementation, this was done synchronously, which led to some blocking issues. This task can be disabled as described here, but it is highly discouraged to do so. All Docker registries provide an API from which any client can fetch some information. Portus makes heavy use of this API, and in this case it fetches the catalog of Docker images/tags. This is done periodically, and it will update the database when needed. Just like the other tasks, this task can be disabled, but we recommend tuning the strategy option as described here instead. For that, you have to consider when you think this synchronization has to be performed, and what its reach should be. There are three possible scenarios to consider here; a sketch of one of them follows this section. Regardless of our recommendations, we suggest you go to the section of the documentation where we describe all options. Note: this was done by the crono process before this new implementation. The old behavior of the old crono process corresponds to the current update-delete strategy. If you have security scanning enabled, then this process will also fetch vulnerabilities so they can be used later by the user interface." } ]
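As a rough illustration of what the periodic catalog-synchronization task does under the update-delete strategy, here is a Go sketch that diffs the registry's catalog against the database and applies both additions and removals. The types are hypothetical simplifications; Portus itself implements this logic in Ruby.

```
package main

import "fmt"

// syncCatalog reconciles the registry's view (catalog) with the
// database (db). Under an update-delete style strategy, repositories
// missing from the catalog are removed and new ones are created.
func syncCatalog(catalog, db map[string]bool) {
	for repo := range catalog {
		if !db[repo] {
			fmt.Println("creating repository:", repo)
			db[repo] = true
		}
	}
	for repo := range db {
		if !catalog[repo] {
			fmt.Println("deleting repository:", repo)
			delete(db, repo)
		}
	}
}

func main() {
	catalog := map[string]bool{"alpine": true, "portus": true}
	db := map[string]bool{"portus": true, "stale/image": true}
	syncCatalog(catalog, db)
}
```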
{ "category": "Provisioning", "file_name": ".md", "project_name": "Pinniped", "subcategory": "Key Management" }
[ { "data": "Pinniped is an authentication service for Kubernetes clusters. As a Kubernetes cluster administrator or user, you can learn how Pinniped works, see how to use it on your clusters, and dive into internals of Pinniped's APIs and architecture. Have a question, comment, or idea? Please reach out via GitHub Issues, GitHub Discussions, or join the Pinniped community. Dive into the overall design and implementation details of Pinniped. See how the Pinniped Supervisor streamlines login to multiple Kubernetes clusters. See how the Pinniped Concierge works to provide a uniform login flow across different Kubernetes clusters. See how the Pinniped Supervisor can work directly with the Kube API server to provide authentication to Kubernetes clusters. Download and set up the pinniped command-line tool on macOS, Linux, or Windows clients. Install the Pinniped Concierge service in a Kubernetes cluster. Install the Pinniped Supervisor service in a Kubernetes cluster. Logging into your Kubernetes cluster using Pinniped for authentication. Using Pinniped for CI/CD cluster operations. Allow your Kubernetes cluster users to authenticate into web apps using the same identities. Set up JSON Web Token (JWT) based token authentication on an individual Kubernetes cluster. Set up JSON Web Token (JWT) based token authentication on an individual Kubernetes cluster using the Pinniped Supervisor as the OIDC provider. Set up webhook-based token authentication on an individual Kubernetes cluster. Set up the Pinniped Supervisor to provide seamless login flows across multiple clusters. Learn how to use one or more identity providers, and identity transformations and policies, on a FederationDomain. Set up the Pinniped Supervisor to use Auth0 login. Set up the Pinniped Supervisor to use Azure Active Directory login. Set up the Pinniped Supervisor to use Dex login. Set up the Pinniped Supervisor to use Microsoft Entra ID to log in. Set up the Pinniped Supervisor to use GitHub as an identity provider. Set up the Pinniped Supervisor to use Okta login. Set up the Pinniped Supervisor to use Workspace ONE Access login. Set up the Pinniped Supervisor to use GitLab login. Set up the Pinniped Supervisor to use OpenLDAP login. Set up the Pinniped Supervisor to use JumpCloud LDAP. Set up the Pinniped Supervisor to use Microsoft Active Directory. See the default configuration values for the ActiveDirectoryIdentityProvider. See the supported cluster types for the Pinniped Concierge. Reference for the pinniped command-line tool. Reference for FIPS builds of Pinniped binaries. Reference for the *.pinniped.dev Kubernetes API groups. A brief overview of the Pinniped source code." } ]
{ "category": "Provisioning", "file_name": "quickstart.md", "project_name": "Pomerium", "subcategory": "Key Management" }
[ { "data": "Run Pomerium Core with Docker containers in under 5 minutes. The Core quickstart uses Pomerium's Hosted Authenticate Service, but you can also configure a self-hosted authenticate service to integrate with Pomerium. You will need Docker and Docker Compose. Create a config.yaml file in the root of your project and add your route and policy configuration to it, replacing user@example.com with your email address. Then create a docker-compose.yaml file in the root of your project and add the configuration below:

```
version: \"3\"
services:
  pomerium:
    image: cr.pomerium.com/pomerium/pomerium:latest
    volumes:
      ## Mount your config file: https://www.pomerium.com/docs/reference/
      - ./config.yaml:/pomerium/config.yaml:ro
    ports:
      - 443:443
  ## https://verify.localhost.pomerium.io --> Pomerium --> http://verify
  verify:
    image: cr.pomerium.com/pomerium/verify:latest
    expose:
      - 8000
```

Start the services:

```
docker compose up
```

Access the verify route you built in your policy: https://verify.localhost.pomerium.io If you get a self-signed certificate warning, see Handle Self-Signed Certificate Warning to bypass it. You should be redirected to the verify service. Although identity verification failed, you successfully integrated Pomerium with the upstream verify service. Because this guide doesn't include a signing key in the configuration, identity verification will fail. See Identity Verification for more information on how Pomerium can use JWTs for authentication; a sketch of that verification step appears below. If you want to try Enterprise, check out the Enterprise with Docker quickstart. If you want to try connecting Pomerium with other services, see some of our Guides. Did you finish this quickstart guide? We'd love to hear what you think. Get in touch with us on our Discuss forum, message us on Twitter, LinkedIn, or check out our Community page. This is a test environment! If you followed all the steps in this doc your Pomerium environment is not using trusted certificates. Remember to use a valid certificate solution before moving this configuration to a production environment. See Certificates for more information." } ]
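As a taste of the identity-verification step this quickstart skips, the sketch below shows how an upstream service might validate the signed identity header that Pomerium attaches to proxied requests. It assumes Pomerium's X-Pomerium-Jwt-Assertion header and its /.well-known/pomerium/jwks.json key endpoint, and it uses the third-party lestrrat-go/jwx library; treat it as an illustration rather than a complete integration.

```
package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/lestrrat-go/jwx/v2/jwk"
	"github.com/lestrrat-go/jwx/v2/jwt"
)

func main() {
	// Pomerium publishes its signing keys at this well-known path
	// (an assumption here; adjust the host to your own domain).
	jwksURL := "https://verify.localhost.pomerium.io/.well-known/pomerium/jwks.json"

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		raw := r.Header.Get("X-Pomerium-Jwt-Assertion")
		if raw == "" {
			http.Error(w, "no identity header", http.StatusUnauthorized)
			return
		}
		// Fetch the JWKS and validate the token's signature and claims.
		keys, err := jwk.Fetch(context.Background(), jwksURL)
		if err != nil {
			http.Error(w, "cannot fetch JWKS", http.StatusBadGateway)
			return
		}
		tok, err := jwt.Parse([]byte(raw), jwt.WithKeySet(keys), jwt.WithValidate(true))
		if err != nil {
			http.Error(w, "invalid identity token", http.StatusUnauthorized)
			return
		}
		email, _ := tok.Get("email")
		fmt.Fprintf(w, "hello, %v\n", email)
	})
	_ = http.ListenAndServe(":8000", nil)
}
```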
{ "category": "Provisioning", "file_name": ".md", "project_name": "Pomerium", "subcategory": "Key Management" }
[ { "data": "Pomerium builds secure, clientless connections to internal web apps and services without a corporate VPN. It's not a VPN alternative; it's the trusted, foolproof way to protect your business. Learn how Pomerium secures your apps and services in this 2-minute demo. Learn how Pomerium simplifies access control by providing clientless access to users within your organization. Learn what Continuous Verification is, how it works with Pomerium, and why it's important for building a Zero Trust Architecture. For a full list of features, see the capabilities sidebar." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "SPIFFE", "subcategory": "Key Management" }
[ { "data": "Teleport Workload Identity is currently in Preview. We are actively working on improving the feature and would love to hear your feedback. We currently do not recommend using this feature in production. Teleport's Workload Identity issues flexible short-lived identities intended for workloads. The term workload refers generally to pieces of software running within your system. These identities can be used to secure workload-to-workload communication (e.g. mTLS) or to allow these workloads to access third-party systems. It is compatible with the industry-standard Secure Production Identity Framework For Everyone (SPIFFE) specification, meaning that it can be used in place of other SPIFFE-compatible identity providers. Teleport Workload Identity brings a number of Teleport features you are already familiar with. Teleport Workload Identity is different from Teleport Machine Identity in that it is intended for workload-to-workload communication and is not intended to grant access to the Teleport cluster itself. Workload Identity does not leverage the Teleport Proxy. SPIFFE (Secure Production Identity Framework For Everyone) is a set of standards for securely identifying workloads. SPIFFE sets out, among other things, a format for workload identities (SPIFFE IDs), documents to prove them (SVIDs), and an API for workloads to obtain them (the Workload API). The open nature and popularity of SPIFFE make it a great choice as a foundation for a full workload identity implementation. 
It is supported as an identity provider by a number of popular tools (such as Linkerd and Istio) and off-the-shelf SDKs exist for implementing SPIFFE directly into your own" }, { "data": "applications. It's important to recognize that SPIFFE does not specify how to use SPIFFE IDs for authorization. This gives a high level of flexibility, allowing you to implement authorization in a way that suits you. The basis of identity in SPIFFE is the SPIFFE ID. This is a unique string that identifies a workload. The SPIFFE ID is formatted as a URI with a scheme of spiffe and contains a trust domain and a workload identifier. The trust domain is the \"root of trust\" for your workload identities. Workloads within the trust domain are issued identities by authorities within the trust domain, and using the root keys of the trust domain, it is possible to validate these identities. The trust domain is encoded as the host within the URI. For Teleport Workload Identity, the trust domain is your Teleport cluster, and this is represented by the name configured for the cluster, e.g. example.teleport.sh. The workload identifier is encoded in the URI as the path. This should be a string that identifies your workload within the trust domain. What you include within this path is up to you and your application's requirements. Typically, the hierarchical nature of the path is leveraged. For example, if you had the service foo operating in the europe region, you may wish to represent this as: /region/europe/svc/foo. Together, this produces a SPIFFE ID that looks like:

```
spiffe://example.teleport.sh/region/europe/svc/foo
```

The SPIFFE ID may be a unique identifier for a workload, but it provides no way for a workload to verifiably prove its identity. This is where Secure Verifiable Identity Documents (SVIDs) come in. The SVID is a document that encodes the SPIFFE ID and a cryptographic proof which allows the SVID to be verified as issued by a trusted authority. SPIFFE sets out two formats for SVIDs: X.509-SVIDs and JWT-SVIDs. The data needed by a workload to verify a SVID is known as the trust bundle. This is a set of certificates belonging to the trusted authorities within the trust domain. The Workload API is a standardized gRPC API that workloads should use to request SVIDs and trust bundles from a SPIFFE identity provider. The Workload API server also handles automatically renewing the credentials for subscribed workloads. The Workload API is usually exposed by an agent that is installed on the same host as the workloads and is accessed using a unix socket rather than a TCP endpoint. It can perform basic authentication and authorization of the workload before issuing SVIDs. This is known as Workload Attestation. Teleport's Workload Identity is an implementation of SPIFFE. Each Teleport cluster acts as a SPIFFE trust domain, with the Auth Service as a certificate authority for issuing SVIDs. Teleport's RBAC system is used to control which Bots and Users are able to request an SVID for a given SPIFFE ID. Roles can specify which SPIFFE IDs can be issued and this role is then granted to the Bot or User. For example:

```
kind: role
version: v6
metadata:
  name: europe-foo-svid-issuer
spec:
  allow:
    spiffe:
      - path: \"/region/europe/svc/foo\"
```

The SPIFFE Workload API is implemented as a configurable service within the tbot agent. The tbot agent should be installed close to the workloads that need to request SVIDs, and they can then use the Workload API exposed by tbot to fetch SVIDs and Trust Bundles. 
Teleport's Workload Identity currently only supports issuing X.509-SVIDs." } ]
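Because tbot exposes a standard SPIFFE Workload API, a workload can consume its X.509 SVID with an off-the-shelf SDK such as go-spiffe. A minimal sketch follows; the socket path and the peer SPIFFE ID are assumptions for this example.

```
package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
	"github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// Connect to the Workload API unix socket exposed by tbot. The
	// path is an assumption; use whatever your tbot config specifies.
	source, err := workloadapi.NewX509Source(ctx,
		workloadapi.WithClientOptions(workloadapi.WithAddr("unix:///opt/tbot/workload.sock")))
	if err != nil {
		panic(err)
	}
	defer source.Close()

	// The source fetches the SVID and trust bundle, and renews them
	// automatically as they approach expiry.
	svid, err := source.GetX509SVID()
	if err != nil {
		panic(err)
	}
	fmt.Println("issued SVID:", svid.ID) // e.g. spiffe://example.teleport.sh/region/europe/svc/foo

	// Dial a peer over mTLS, authorizing it by its SPIFFE ID (the
	// peer ID and URL below are hypothetical).
	serverID := spiffeid.RequireFromString("spiffe://example.teleport.sh/region/europe/svc/bar")
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: tlsconfig.MTLSClientConfig(source, source, tlsconfig.AuthorizeID(serverID)),
		},
	}
	_, _ = client.Get("https://bar.internal:8443/")
}
```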
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "sso", "subcategory": "Key Management" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, given the following query: ``` printf(\"hello world\\n\"); ``` code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
{ "category": "Provisioning", "file_name": "github-privacy-statement.md", "project_name": "sso", "subcategory": "Key Management" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organization's data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you post. You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 through D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 through D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they control). If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our content. In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, \"Confidential Information\"), regardless of whether it is marked or identified as such. You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the \"Purpose\"), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) was known to you before we disclose it to you; (c) is independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) is disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, \"Feedback\"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S. Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service \"as is\" and \"as available\", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use or inability to use the Service or otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your expense. Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "sso", "subcategory": "Key Management" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:|:|:|-:|-:| | parent directory.. | parent directory.. | parent directory.. | nan | nan | | adr | adr | adr | nan | nan | | architecture | architecture | architecture | nan | nan | | diagrams | diagrams | diagrams | nan | nan | | img | img | img | nan | nan | | API.md | API.md | API.md | nan | nan | | generaterequestsignature.md | generaterequestsignature.md | generaterequestsignature.md | nan | nan | | googleprovidersetup.md | googleprovidersetup.md | googleprovidersetup.md | nan | nan | | oktaprovidersetup.md | oktaprovidersetup.md | oktaprovidersetup.md | nan | nan | | quickstart.md | quickstart.md | quickstart.md | nan | nan | | ssoauthenticatorconfig.md | ssoauthenticatorconfig.md | ssoauthenticatorconfig.md | nan | nan | | ssoconfig.md | ssoconfig.md | sso_config.md | nan | nan | | ssoproxyconfig.md | ssoproxyconfig.md | ssoproxyconfig.md | nan | nan | | View all files | View all files | View all files | nan | nan |" } ]
{ "category": "Provisioning", "file_name": "api-docs.md", "project_name": "Vault", "subcategory": "Key Management" }
[ { "data": "Vault has an HTTP API that can be used to control every aspect of Vault. The Vault HTTP API gives you full access to Vault using REST like HTTP verbs. Every aspect of Vault can be controlled using the APIs. The Vault CLI uses the HTTP API to access Vault similar to all other consumers. All API routes are prefixed with /v1/. This documentation is only for the v1 API, which is currently the only version. Backwards compatibility: At the current version, Vault does not yet promise backwards compatibility even with the v1 prefix. We'll remove this warning when this policy changes. At this point in time the core API (that is, sys/ routes) change very infrequently, but various secrets engines/auth methods/etc. sometimes have minor changes to accommodate new features as they're developed. The API is expected to be accessed over a TLS connection at all times, with a valid certificate that is verified by a well-behaved client. It is possible to disable TLS verification for listeners, however, so API clients should expect to have to do both depending on user settings. Once Vault is unsealed, almost every other operation requires a client token. A user may have a client token sent to them. The client token must be sent as either the X-Vault-Token HTTP Header or as Authorization HTTP Header using the Bearer <token> scheme. Otherwise, a client token can be retrieved using an authentication engine. Each auth method has one or more unauthenticated login endpoints. These endpoints can be reached without any authentication, and are used for authentication to Vault itself. These endpoints are specific to each auth method. Responses from auth login methods that generate an authentication token are sent back to the client in JSON. The resulting token should be saved on the client or passed via the X-Vault-Token or Authorization header for future requests. Several Vault APIs require specifying path parameters. The path parameter cannot end in periods. Otherwise, Vault will return a 404 unsupported path error. When using Namespaces the final path of the API request is relative to the X-Vault-Namespace header. For instance, if a request URI is secret/foo with the X-Vault-Namespace header set as ns1/ns2/, then the resulting request path to Vault will be ns1/ns2/secret/foo. Note that it is semantically equivalent to use the full path rather than the X-Vault-Namespace header, Vault will match the corresponding namespace based on correlating user input. Similar path results may be achieved if X-Vault-Namespace is set to ns1/ with the request path of ns2/secret/foo as well, or otherwise if X-Vault-Namespace is omitted entirely and instead a complete path is provided such as: ns1/ns2/secret/foo. For example, the following two commands result in equivalent requests: ``` $ curl \\ -H \"X-Vault-Token: f3b09679-3001-009d-2b80-9c306ab81aa6\" \\ -H \"X-Vault-Namespace: ns1/ns2/\" \\ -X GET \\ http://127.0.0.1:8200/v1/secret/foo``` ``` $ curl \\ -H \"X-Vault-Token: f3b09679-3001-009d-2b80-9c306ab81aa6\" \\ -X GET \\ http://127.0.0.1:8200/v1/ns1/ns2/secret/foo``` Typically the request data, body and response data to and from Vault is in" }, { "data": "Vault sets the Content-Type header appropriately with its response and does not require it from the clients request. The demonstration below uses the KVv1 secrets engine, which is a simple Key Value store. 
Please read the API documentation of KV secret engines for details of KVv1 compared to KVv2 and how they differ in their URI paths as well as the features available in version 2 of the KV secrets engine. For KVv1, reading a secret using the HTTP API is done by issuing a GET: ``` /v1/secret/foo``` This maps to secret/foo where foo is the key in the secret/ mount, which is mounted by default on a fresh Vault install and is of type kv. Here is an example of reading a secret using cURL: ``` $ curl \\ -H \"X-Vault-Token: f3b09679-3001-009d-2b80-9c306ab81aa6\" \\ -X GET \\ http://127.0.0.1:8200/v1/secret/foo``` A few endpoints consume calls with GET query string parameters, but only if those parameters are not sensitive, especially since some load balancers will be able to log these. Most endpoints that accept POST query string parameters expect those parameters in the request body. You can list secrets as well. To do this, either issue a GET with the query string parameter list=true, or use the LIST HTTP verb. For the kv secrets engine, listing is allowed on directories only, which returns the keys at the requested path: ``` $ curl \\ -H \"X-Vault-Token: f3b09679-3001-009d-2b80-9c306ab81aa6\" \\ -X LIST \\ http://127.0.0.1:8200/v1/secret/``` The API documentation uses LIST as the HTTP verb, but you can still use GET with the ?list=true query string. To make an API call with specific data in the request body, issue a POST: ``` /v1/secret/foo``` with a JSON body like: ``` { \"value\": \"bar\"}``` Here is an example of writing a secret using cURL: ``` $ curl \\ -H \"X-Vault-Token: f3b09679-3001-009d-2b80-9c306ab81aa6\" \\ -H \"Content-Type: application/json\" \\ -X POST \\ -d '{\"value\":\"bar\"}' \\ http://127.0.0.1:8200/v1/secret/baz``` Vault currently considers PUT and POST to be synonyms. Rather than trust a client's stated intentions, Vault engines can implement an existence check to discover whether an operation is actually a create or update operation based on the data already stored within Vault. This makes permission management via ACLs more flexible. A KVv2 example for the engine path of secret requires that the URI be appended with data/ prior to the secret name (baz), such as: ``` $ curl \\ -H \"X-Vault-Token: f3b09679-3001-009d-2b80-9c306ab81aa6\" \\ -H \"Content-Type: application/json\" \\ -X POST \\ -d '{\"data\":{\"value\":\"bar\"}}' \\ http://127.0.0.1:8200/v1/secret/data/baz``` For more examples, please look at the Vault API client. Requests that are sent to a Vault Proxy that is configured to use the require_request_header option must include the X-Vault-Request header entry, e.g.: ``` $ curl \\ -H \"X-Vault-Token: f3b09679-3001-009d-2b80-9c306ab81aa6\" \\ -H \"X-Vault-Request: true\" \\ -H \"Content-Type: application/json\" \\ -X POST \\ -d '{\"value\":\"bar\"}' \\ http://127.0.0.1:8200/v1/secret/baz``` The Vault CLI always adds this header to every request, regardless of whether the request is being sent to a Vault Agent or directly to a Vault Server. In addition, the Vault SDK always adds this header to every request. To retrieve the help for any API within Vault, including mounted engines, auth methods, etc., append ?help=1 to any URL. If you have valid permission to access the path, then the help text will be returned as a markdown-formatted block in the help attribute of the response. Additionally, with the OpenAPI generation in Vault, you will get back a small OpenAPI document in the openapi attribute. 
This document is relevant for the path you're looking up and any paths under it - also note paths in the OpenAPI document are relative to the initial path queried. Example request: ``` $ curl \\ -H \"X-Vault-Token: f3b09679-3001-009d-2b80-9c306ab81aa6\" \\ http://127.0.0.1:8200/v1/secret?help=1``` Example response: ``` { \"help\": \"## DESCRIPTION\\n\\nThis backend provides a versioned key-value store. The kv backend reads and\\nwrites arbitrary secrets to the storage backend. The secrets are\\nencrypted/decrypted by Vault: they are never stored unencrypted in the backend\\nand the backend never has an opportunity to see the unencrypted value. Each key\\ncan have a configured number of versions, and versions can be retrieved based on\\ntheir version numbers.\\n\\n## PATHS\\n\\nThe following paths are supported by this backend. To view help for\\nany of the paths below, use the help command with any route matching\\nthe path pattern. Note that depending on the policy of your auth token,\\nyou may or may not be able to access certain paths.\\n\\n ^.*$\\n\\n\\n ^config$\\n Configures settings for the KV store\\n\\n ^data/(?P<path>.*)$\\n Write, Read, and Delete data in the Key-Value Store.\\n\\n ^delete/(?P<path>.*)$\\n Marks one or more versions as deleted in the KV store.\\n\\n ^destroy/(?P<path>.*)$\\n Permanently removes one or more versions in the KV store\\n\\n ^metadata/(?P<path>.*)$\\n Configures settings for the KV store\\n\\n ^undelete/(?P<path>.*)$\\n Undeletes one or more versions from the KV store.\", \"openapi\": { \"openapi\": \"3.0.2\", \"info\": { \"title\": \"HashiCorp Vault API\", \"description\": \"HTTP API that gives you full access to Vault. All API routes are prefixed with `/v1/`.\", \"version\": \"1.0.0\", \"license\": { \"name\": \"Mozilla Public License 2.0\", \"url\": \"https://www.mozilla.org/en-US/MPL/2.0\" } }, \"paths\": { \"/.*\": {}, \"/config\": { \"description\": \"Configures settings for the KV store\", \"x-vault-create-supported\": true, \"get\": { \"summary\": \"Read the backend level settings.\", \"tags\": [ \"secrets\" ], \"responses\": { \"200\": { \"description\": \"OK\" } } }, ...[output truncated]... } }}``` A common JSON structure is always returned to return errors: ``` { \"errors\": [ \"message\", \"another message\" ]}``` This structure will be returned for any HTTP status greater than or equal to 400. The following HTTP status codes are used throughout the API. Vault tries to adhere to these whenever possible, but in case it doesn't, feel free to raise a bug for our attention! Note: Applications should be prepared to accept both 200 and 204 as success. 204 is simply an indication that there is no response body to parse, but API endpoints that indicate that they return a 204 may return a 200 if warnings are generated during the operation. A maximum request size of 32MB is imposed to prevent a denial of service attack with arbitrarily large requests; this can be tuned per listener block in Vault's server configuration file." } ]
{ "category": "Provisioning", "file_name": "github-privacy-statement.md", "project_name": "Alcide", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository.

Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third-party product that accesses GitHub.

Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service.

Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement.

Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms.

Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better.

Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret.

Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, "Confidential Information"), regardless of whether it is marked or identified as such. You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the "Purpose"), and not for any other purpose.
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature).

Exceptions. Confidential Information will not include information that: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) is known to you before we disclose it to you; (c) is independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) is disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law.

We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, "Feedback"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation.

Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change.

Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term.

Payment Based on Plan: For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made.

Payment Based on Usage: Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details.

Invoicing: For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S. Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement.
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support.

Short version: We provide our service "as is", and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect.

GitHub provides the Website and the Service "as is" and "as available", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service.

Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you.

You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control.

Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved.

If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes.
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your expense.

Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important, and we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them.

We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice.

Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California.

GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void.

Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement.
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms, including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal.
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "Vault", "subcategory": "Key Management" }
[ { "data": "Vault is an identity-based secret and encryption management system. This documentation covers the main concepts of Vault, what problems it can solve, and contains a quick start for using Vault. Centrally store, access, and deploy secrets across applications, systems, and infrastructure. Securely handle data such as social security numbers, credit card numbers, and other types of compliance-regulated information. Use a standardized workflow for distribution and lifecycle management of cryptographic keys in various KMS providers. On this page:" } ]
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "Alcide", "subcategory": "Security & Compliance" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "ARMO", "subcategory": "Security & Compliance" }
[ { "data": "ARMO Platform is a SaaS solution for Kubernetes and CI/CD security that is powered by Kubescape. You can use ARMO Platform to harden your Kubernetes clusters, secure your CI/CD pipelines, understand your RBAC status, or pass your Kubernetes security audits. To sign up for ARMO Platform, sign up for an ARMO Platform account. The signup process guides you through connecting your cluster to ARMO Platform and your first scan. ARMO Platform uses the open source project Kubescape to scan your Kubernetes clusters, registries, and code repositories for vulnerabilities and misconfigurations. ARMO initially developed the open source project and continues to contribute to it. When running in-cluster, ARMO Platform provides a code snippet to deploy Kubescape as a microservice using a helm chart. The Kubescape microservice scans the cluster periodically. Misconfiguration information is pulled from ARMOs regolibrary, while vulnerability information is pulled from Kubevuln. By default, the microservice scans the host node for to give more context to scans and includes this data when the scans are sent. The scans are aggregated and stored in ARMO Platform, where you can use our toolset to view any identified issue and potential fixes or remediation steps. For more information about Kubescape, view the Kubescape architecture documentation. ARMO Platform and the Kubescape microservice communicate using gateways over HTTPS. Scan data is sent over HTTPS to an endpoint on the ARMO Platform. Scan data sent to the ARMO Platform is saved for one month for a free user and three months for a paid user before being deleted. Updated 27 days ago" } ]
{ "category": "Provisioning", "file_name": "verifying-or-approving-a-domain-for-your-organization.md", "project_name": "Alcide", "subcategory": "Security & Compliance" }
[ { "data": "You can verify your ownership of domains with GitHub to confirm your organization's identity. Organization owners can verify or approve a domain for an organization. After verifying ownership of your organization's domains, a \"Verified\" badge will display on the organization's profile. To display a \"Verified\" badge, the website and email information shown on an organization's profile must match the verified domain or domains. If the website and email address shown on your organization's profile are hosted on different domains, you must verify both domains. If the website and email address use variants of the same domain, you must verify both variants. For example, if the profile shows the website www.example.com and the email address info@example.com, you would need to verify both www.example.com and example.com. If you confirm your organizations identity by verifying your domain and restricting email notifications to only verified email domains, you can help prevent sensitive information from being exposed. For more information see \"Best practices for preventing data leaks in your organization.\" To verify a domain, you must have access to modify domain records with your domain hosting service. In the upper-right corner of GitHub, select your profile photo, then click Your organizations. Next to the organization, click Settings. In the \"Security\" section of the sidebar, click Verified and approved domains. Next to \"Verified & approved domains for your enterprise account\", click Add a domain. Under \"What domain would you like to add?\", type the domain you'd like to verify, then click Add domain. Follow the instructions under \"Add a DNS TXT record\" to create a DNS TXT record with your domain hosting service. Wait for your DNS configuration to change, which may take up to 72 hours. You can confirm your DNS configuration has changed by running the dig command on the command line, replacing TXT-RECORD-NAME with the name of the TXT record created in your DNS configuration. You should see your new TXT record listed in the command output. ``` dig TXT-RECORD-NAME +nostats +nocomments +nocmd TXT ``` After confirming your TXT record is added to your DNS, follow steps one through three above to navigate to your organization's approved and verified domains. To the right of the domain that's pending verification, select the dropdown menu, then click Continue verifying. Click Verify. Optionally, once the \"Verified\" badge is visible on your organization's profile page, you can delete the TXT entry from the DNS record at your domain hosting service. Note: The ability to approve a domain not owned by your organization or enterprise is currently in beta and subject to change. In the upper-right corner of GitHub, select your profile photo, then click Your organizations. Next to the organization, click Settings. In the \"Security\" section of the sidebar, click Verified and approved domains. Next to \"Verified & approved domains for your enterprise account\", click Add a domain. Under \"What domain would you like to add?\", type the domain you'd like to verify, then click Add domain. To the right of \"Can't verify this domain?\", click Approve it instead. Read the information about domain approval, then click Approve DOMAIN. In the upper-right corner of GitHub, select your profile photo, then click Your organizations. Next to the organization, click Settings. In the \"Security\" section of the sidebar, click Verified and approved domains. 
4. To the right of the domain to remove, select the dropdown menu, then click Delete.
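For reference, a concrete run of the dig check from the verification steps above might look like the following. The record name here is hypothetical; use the exact TXT record name GitHub generates for your organization:

```
# Query only the TXT record, suppressing dig's stats, comments, and command echo.
dig _github-challenge-my-org.example.com +nostats +nocomments +nocmd TXT
```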
{ "category": "Provisioning", "file_name": ".md", "project_name": "cert-manager", "subcategory": "Security & Compliance" }
[ { "data": "Learn how to deploy cert-manager and how to configure it to get certificates for the NGINX Ingress controller from Let's Encrypt. Learn how to deploy cert-manager on Google Kubernetes Engine and how to configure it to get certificates for Ingress, from Let's Encrypt. Learn how to deploy cert-manager on Azure Kubernetes Service (AKS) and how to configure it to get certificates for an HTTPS web server, from Let's Encrypt. 2024 The cert-manager Authors. 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Provisioning", "file_name": "docs.cerbos.dev.md", "project_name": "Cerbos", "subcategory": "Security & Compliance" }
[ { "data": "Cerbos helps you super-charge your authorization implementation by writing context-aware access control policies for your application resources. Author access rules using an intuitive YAML configuration language, use your Git-ops infrastructure to test and deploy them and, make simple API requests to the Cerbos PDP to evaluate the policies and make dynamic access decisions. Instantly update your access policies without re-compiling or re-deploying your application. Let your product owner tweak access policies on their own while you focus on more interesting work. The traditional practice of weaving authorization logic into application code effectively obscures the logic and complicates the source code. Documentation is notoriously difficult to keep up-to-date as the system evolves inevitably requiring a code spelunking session to answer questions or update the documentation. This is often tedious, error-prone and requires valuable developer time. The simple policy-as-configuration approach provided by Cerbos helps even non-developers easily understand the authorization logic of the system. Best of all, it is always guaranteed to be up-to-date. In modern microservice environments it is quite common to share some resources between different services developed by different teams (e.g. a bank account in a banking system). These services could even be developed using different programming languages. Cerbos provides a language-agnostic API to share common access control policies between these disparate services ensuring instant consistency without the need to coordinate development and deployment efforts across many teams. Cerbos provides advanced tooling to lint, compile and test policies. Native GitOps support is built in. Use the same development best practices you use day-to-day to develop and deploy authorization logic. The textual policy language of Cerbos makes it ideal for storing policies on version control systems. Follow the evolution of access rules through time and pinpoint exactly when changes were made, why, and by whom. Cerbos Policy Decision Point (PDP) is built for modern, containerised microservice environments with support for both x86-64 and ARM64 architectures, comprehensive observability integrations (metrics, distributed tracing), REST and gRPC endpoints, and native GitOps support (CI tooling, push-to-deploy). Author Cerbos policies to define access rules for your" }, { "data": "Optionally, write unit tests for the policies using the Cerbos DSL. Compile the policies and run tests using the Cerbos CLI. Follow your standard development process to push the changes to production. (E.g. create pull request, run CI tests, get approval and merge to prod branch) Cerbos will automatically pull the latest commits from the production branch and update the policies in place without requiring a restart. Your changes are now rolled out! Cerbos is designed to be deployed as a service rather than a library compiled into an application. This design choice provides several benefits: Permission checks can be performed by any part of the application stack and even shared between multiple services regardless of the programming language, CPU architecture, operating system or deployment model. Policy updates instantly take effect without having to recompile or redeploy the applications. This reduces disruption to busy services and enables policy authors to iterate quickly and respond to events faster. 
With modern network stacks, the communication overhead is effectively negligible in all but the most extreme cases. Even in those exceptional cases, scaling Cerbos to handle the demand is extremely easy due to its lightweight, stateless design. All development and optimization efforts to Cerbos can be concentrated on a single project because we do not need to replicate the effort on multiple language-specific implementations. All our users, regardless of their programming language of choice, immediately get the benefit of the latest and greatest Cerbos features as soon as they are released. The Cerbos approach is a proven, modern, cloud native pattern for delivering language-agnostic infrastructure services. Microsoft Dapr, Istio and Linkerd are good examples of popular products utilising similar language-agnostic service APIs to augment applications. Because Cerbos is in the critical request path and expected to handle large volumes of requests, we are obsessive about making Cerbos as fast and as efficient as possible with every release. Cerbos exposes an efficient, low latency gRPC API and is designed to be stateless and lightweight so that it can be deployed as a sidecar right next to your application. It can even be accessed over Unix domain sockets for extra security and reduced overhead.
{ "category": "Provisioning", "file_name": "quickstart.md", "project_name": "Cerbos", "subcategory": "Security & Compliance" }
[ { "data": "Create a directory to store the policies. ``` mkdir -p cerbos-quickstart/policies``` Now start the Cerbos server. We are using the container image in this guide but you can follow along using the binary as well. See installation instructions for more information. ``` docker run --rm --name cerbos -d -v $(pwd)/cerbos-quickstart/policies:/policies -p 3592:3592 -p 3593:3593 ghcr.io/cerbos/cerbos:0.36.0``` Time to try out a simple request. | 0 | 1 | |-:|:--| | nan | If you prefer to use Postman, Insomnia or any other software that supports OpenAPI, you can follow this guide along on those tools by downloading the OpenAPI definitions from http://localhost:3592/schema/swagger.json. You can also use the built-in API browser by pointing your browser to http://localhost:3592. | cURL .NET Go Java JS PHP Python Ruby Rust ``` cat <<EOF | curl --silent \"http://localhost:3592/api/check/resources?pretty\" -d @- { \"requestId\": \"quickstart\", \"principal\": { \"id\": \"bugs_bunny\", \"roles\": [ \"user\" ], \"attr\": { \"beta_tester\": true } }, \"resources\": [ { \"actions\": [ \"view:public\", \"comment\" ], \"resource\": { \"kind\": \"album:object\", \"id\": \"BUGS001\", \"attr\": { \"owner\": \"bugs_bunny\", \"public\": false, \"flagged\": false } } }, { \"actions\": [ \"view:public\", \"comment\" ], \"resource\": { \"kind\": \"album:object\", \"id\": \"DAFFY002\", \"attr\": { \"owner\": \"daffy_duck\", \"public\": true, \"flagged\": false } } } ] } EOF``` ``` using Cerbos.Api.V1.Effect; using Cerbos.Sdk.Response; using Cerbos.Sdk.Builder; using Cerbos.Sdk.Utility; internal class Program { private static void Main(string[] args) { var client = CerbosClientBuilder.ForTarget(\"http://localhost:3593\").WithPlaintext().Build(); var request = CheckResourcesRequest .NewInstance() .WithRequestId(RequestId.Generate()) .WithIncludeMeta(true) .WithPrincipal( Principal .NewInstance(\"bugs_bunny\", \"user\") .WithAttribute(\"beta_tester\", AttributeValue.BoolValue(true)) ) .WithResourceEntries( ResourceEntry .NewInstance(\"album:object\", \"BUGS001\") .WithAttribute(\"owner\", AttributeValue.StringValue(\"bugs_bunny\")) .WithAttribute(\"public\", AttributeValue.BoolValue(false)) .WithAttribute(\"flagged\", AttributeValue.BoolValue(false)) .WithActions(\"comment\", \"view:public\"), ResourceEntry .NewInstance(\"album:object\", \"DAFFY002\") .WithAttribute(\"owner\", AttributeValue.StringValue(\"daffy_duck\")) .WithAttribute(\"public\", AttributeValue.BoolValue(true)) .WithAttribute(\"flagged\", AttributeValue.BoolValue(false)) .WithActions(\"comment\", \"view:public\") ); CheckResourcesResponse result = client.CheckResources(request); foreach (var resourceId in new[] { \"BUGS001\", \"DAFFY002\" }) { var resultEntry = result.Find(resourceId); Console.Write($\"\\nResource ID: {resourceId}\\n\"); foreach (var actionEffect in resultEntry.Actions) { string action = actionEffect.Key; Effect effect = actionEffect.Value; Console.Write($\"\\t{action} -> {(effect == Effect.Allow ? 
\"EFFECTALLOW\" : \"EFFECTDENY\")}\\n\"); } } } }``` ``` package main import ( \"context\" \"log\" \"github.com/cerbos/cerbos-sdk-go/cerbos\" ) func main() { c, err := cerbos.New(\"localhost:3593\", cerbos.WithPlaintext()) if err != nil { log.Fatalf(\"Failed to create client: %v\", err) } principal := cerbos.NewPrincipal(\"bugs_bunny\", \"user\") principal.WithAttr(\"beta_tester\", true) kind := \"album:object\" actions := []string{\"view:public\", \"comment\"} r1 := cerbos.NewResource(kind, \"BUGS001\") r1.WithAttributes(map[string]any{ \"owner\": \"bugs_bunny\", \"public\": false, \"flagged\": false, }) r2 := cerbos.NewResource(kind, \"DAFFY002\") r2.WithAttributes(map[string]any{ \"owner\": \"daffy_duck\", \"public\": true, \"flagged\": false, }) batch := cerbos.NewResourceBatch() batch.Add(r1, actions...) batch.Add(r2, actions...) resp, err := c.CheckResources(context.Background(), principal, batch) if err != nil { log.Fatalf(\"Failed to check resources: %v\", err) } log.Printf(\"%v\", resp) }``` ``` package demo; import static dev.cerbos.sdk.builders.AttributeValue.boolValue; import static dev.cerbos.sdk.builders.AttributeValue.stringValue; import java.util.Map; import dev.cerbos.sdk.CerbosBlockingClient; import dev.cerbos.sdk.CerbosClientBuilder; import dev.cerbos.sdk.CheckResult; import dev.cerbos.sdk.builders.Principal; import dev.cerbos.sdk.builders.ResourceAction; public class App { public static void main(String[] args) throws CerbosClientBuilder.InvalidClientConfigurationException { CerbosBlockingClient client=new CerbosClientBuilder(\"localhost:3593\").withPlaintext().buildBlockingClient(); for (String n : new String[]{\"BUGS001\", \"DAFFY002\"}) { CheckResult cr = client.batch( Principal.newInstance(\"bugs_bunny\", \"user\") .withAttribute(\"beta_tester\", boolValue(true)) ) .addResources( ResourceAction.newInstance(\"album:object\",\"BUGS001\") .withAttributes( Map.of( \"owner\", stringValue(\"bugs_bunny\"), \"public\", boolValue(false), \"flagged\", boolValue(false) ) ) .withActions(\"view:public\", \"comment\"), ResourceAction.newInstance(\"album:object\",\"DAFFY002\") .withAttributes( Map.of( \"owner\", stringValue(\"daffy_duck\"), \"public\", boolValue(true), \"flagged\", boolValue(false) ) ) .withActions(\"view:public\", \"comment\") ) .check().find(n).orElse(null); if (cr != null) { System.out.printf(\"\\nResource: %s\\n\", n); cr.getAll().forEach((action, allowed) -> {" }, { "data": "-> %s\\n\", action, allowed ? \"EFFECTALLOW\" : \"EFFECTDENY\"); }); } } } }``` ``` const { GRPC: Cerbos } = require(\"@cerbos/grpc\"); const cerbos = new Cerbos(\"localhost:3593\", { tls: false }); (async() => { const kind = \"album:object\"; const actions = [\"view:public\", \"comment\"]; const cerbosPayload = { principal: { id: \"bugs_bunny\", roles: [\"user\"], attributes: { beta_tester: true, }, }, resources: [ { resource: { kind: kind, id: \"BUGS001\", attributes: { owner: \"bugs_bunny\", public: false, flagged: false, }, }, actions: actions, }, { resource: { kind: kind, id: \"DAFFY002\", attributes: { owner: \"daffy_duck\", public: true, flagged: false, }, }, actions: actions, }, ], }; const decision = await cerbos.checkResources(cerbosPayload); console.log(decision.results) })();``` ``` <?php require DIR . 
'/vendor/autoload.php'; use Cerbos\\Effect\\V1\\Effect; use Cerbos\\Sdk\\Builder\\AttributeValue; use Cerbos\\Sdk\\Builder\\CerbosClientBuilder; use Cerbos\\Sdk\\Builder\\CheckResourcesRequest; use Cerbos\\Sdk\\Builder\\Principal; use Cerbos\\Sdk\\Builder\\ResourceEntry; use Cerbos\\Sdk\\Utility\\RequestId; $client = CerbosClientBuilder::newInstance(\"localhost:3593\") ->withPlaintext(true) ->build(); $request = CheckResourcesRequest::newInstance() ->withRequestId(RequestId::generate()) ->withPrincipal( Principal::newInstance(\"bugs_bunny\") ->withRole(\"user\") ->withAttribute(\"beta_tester\", AttributeValue::boolValue(true)) ) ->withResourceEntries( [ ResourceEntry::newInstance(\"album:object\", \"BUGS001\") ->withAttribute(\"owner\", AttributeValue::stringValue(\"bugs_bunny\")) ->withAttribute(\"public\", AttributeValue::boolValue(false)) ->withAttribute(\"flagged\", AttributeValue::boolValue(false)) ->withActions([\"comment\", \"view:public\"]), ResourceEntry::newInstance(\"album:object\", \"DAFFY002\") ->withAttribute(\"owner\", AttributeValue::stringValue(\"daffy_duck\")) ->withAttribute(\"public\", AttributeValue::boolValue(true)) ->withAttribute(\"flagged\", AttributeValue::boolValue(false)) ->withActions([\"comment\", \"view:public\"]) ] ); $checkResourcesResponse = $client->checkResources($request); foreach ([\"BUGS001\", \"DAFFY002\"] as $resourceId) { $resultEntry = $checkResourcesResponse->find($resourceId); $actions = $resultEntry->getActions(); foreach ($actions as $k => $v) { printf(\"%s -> %s\", $k, Effect::name($v)); } } ?>``` ``` import json from cerbos.sdk.client import CerbosClient from cerbos.sdk.model import Principal, Resource, ResourceAction, ResourceList from fastapi import HTTPException, status principal = Principal( \"bugs_bunny\", roles=[\"user\"], attr={ \"beta_tester\": True, }, ) actions = [\"view:public\", \"comment\"] resource_list = ResourceList( resources=[ ResourceAction( Resource( \"BUGS001\", \"album:object\", attr={ \"owner\": \"bugs_bunny\", \"public\": False, \"flagged\": False, }, ), actions=actions, ), ResourceAction( Resource( \"DAFFY002\", \"album:object\", attr={ \"owner\": \"daffy_duck\", \"public\": True, \"flagged\": False, }, ), actions=actions, ), ], ) with CerbosClient(host=\"http://localhost:3592\") as c: try: resp = c.checkresources(principal=principal, resources=resourcelist) resp.raiseiffailed() except Exception: raise HTTPException( statuscode=status.HTTP403_FORBIDDEN, detail=\"Unauthorized\" ) print(json.dumps(resp.todict(), sortkeys=False, indent=4))``` ``` require 'cerbos' require 'json' client = Cerbos::Client.new(\"localhost:3593\", tls: false) kind = \"album:object\" actions = [\"view:public\", \"comment\"] r1 = { :kind => kind, :id => \"BUGS001\", :attributes => { :owner => \"bugs_bunny\", :public => false, :flagged => false, } } r2 = { :kind => kind, :id => \"DAFFY002\", :attributes => { :owner => \"daffy_duck\", :public => true, :flagged => false, } } decision = client.check_resources( principal: { id: \"bugs_bunny\", roles: [\"user\"], attributes: { beta_tester: true, }, }, resources: [ { resource: r1, actions: actions }, { resource: r2, actions: actions }, ], ) res = { :results => [ { :resource => r1, :actions => { :comment => decision.allow?(resource: r1, action: \"comment\"), :\"view:public\" => decision.allow?(resource: r1, action: \"view:public\"), }, }, { :resource => r2, :actions => { :comment => decision.allow?(resource: r2, action: \"comment\"), :\"view:public\" => decision.allow?(resource: r2, action: 
\"view:public\"), }, }, ], } puts JSON.pretty_generate(res)``` ``` use cerbos::sdk::attr::attr; use cerbos::sdk::model::{Principal, Resource, ResourceAction, ResourceList}; use cerbos::sdk::{CerbosAsyncClient, CerbosClientOptions, CerbosEndpoint, Result}; async fn main() -> Result<()> { let opt = CerbosClientOptions::new(CerbosEndpoint::HostPort(\"localhost\", 3593)).with_plaintext(); let mut client = CerbosAsyncClient::new(opt).await?; let principal = Principal::new(\"bugsbunny\", [\"user\"]).withattributes([attr(\"beta_tester\", true)]); let actions: [&str; 2] = [\"view:public\", \"comment\"]; let resp = client .check_resources( principal, ResourceList::new_from([ ResourceAction( Resource::new(\"BUGS001\", \"album:object\").with_attributes([ attr(\"owner\", \"bugs_bunny\"), attr(\"public\", false), attr(\"flagged\", false), ]), actions, ), ResourceAction( Resource::new(\"DAFFY002\", \"album:object\").with_attributes([ attr(\"owner\", \"daffy_duck\"), attr(\"public\", true), attr(\"flagged\", false), ]), actions, ), ]), None, ) .await?; println!(\"{:?}\", resp.response); Ok(()) }``` In this example, the bugsbunny principal is trying to perform two actions (view:public and comment) on two album:object resources. The resource instance with the ID BUGS001 belongs to bugsbunny and is private (public attribute is false). The other resource instance with the ID DAFFY002 belongs to daffy_duck and is" }, { "data": "This is the response from the server: ``` { \"requestId\": \"quickstart\", \"results\": [ { \"resource\": { \"id\": \"BUGS001\", \"kind\": \"album:object\" }, \"actions\": { \"comment\": \"EFFECT_DENY\", \"view:public\": \"EFFECT_DENY\" } }, { \"resource\": { \"id\": \"DAFFY002\", \"kind\": \"album:object\" }, \"actions\": { \"comment\": \"EFFECT_DENY\", \"view:public\": \"EFFECT_DENY\" } } ] }``` Bugs Bunny is not allowed to view or comment on any of the album resources even the ones that belong to him. This is because currently there are no policies defined for the album:object resource. Now create a derived roles definition that assigns the owner dynamic role to a user if the owner attribute of the resource theyre trying to access is equal to their ID. ``` cat > cerbos-quickstart/policies/derivedrolescommon.yaml <<EOF apiVersion: \"api.cerbos.dev/v1\" derivedRoles: name: common_roles definitions: name: owner parentRoles: [\"user\"] condition: match: expr: request.resource.attr.owner == request.principal.id EOF``` Also create a resource policy that gives owners full access to their own albums. ``` cat > cerbos-quickstart/policies/resource_album.yaml <<EOF apiVersion: api.cerbos.dev/v1 resourcePolicy: version: \"default\" importDerivedRoles: common_roles resource: \"album:object\" rules: actions: ['*'] effect: EFFECT_ALLOW derivedRoles: owner EOF``` Try the request again. This time bugsbunny should be allowed access to his own album but denied access to the album owned by daffyduck. 
``` cat <<EOF | curl --silent \"http://localhost:3592/api/check/resources?pretty\" -d @- { \"requestId\": \"quickstart\", \"principal\": { \"id\": \"bugs_bunny\", \"roles\": [ \"user\" ], \"attr\": { \"beta_tester\": true } }, \"resources\": [ { \"actions\": [ \"view:public\", \"comment\" ], \"resource\": { \"kind\": \"album:object\", \"id\": \"BUGS001\", \"attr\": { \"owner\": \"bugs_bunny\", \"public\": false, \"flagged\": false } } }, { \"actions\": [ \"view:public\", \"comment\" ], \"resource\": { \"kind\": \"album:object\", \"id\": \"DAFFY002\", \"attr\": { \"owner\": \"daffy_duck\", \"public\": true, \"flagged\": false } } } ] } EOF``` ``` { \"requestId\": \"quickstart\", \"results\": [ { \"resource\": { \"id\": \"BUGS001\", \"kind\": \"album:object\" }, \"actions\": { \"comment\": \"EFFECT_ALLOW\", \"view:public\": \"EFFECT_ALLOW\" } }, { \"resource\": { \"id\": \"DAFFY002\", \"kind\": \"album:object\" }, \"actions\": { \"comment\": \"EFFECT_DENY\", \"view:public\": \"EFFECT_DENY\" } } ] }``` Now add a rule to the policy to allow users to view public albums. ``` cat > cerbos-quickstart/policies/resource_album.yaml <<EOF apiVersion: api.cerbos.dev/v1 resourcePolicy: version: \"default\" importDerivedRoles: common_roles resource: \"album:object\" rules: actions: ['*'] effect: EFFECT_ALLOW derivedRoles: owner actions: ['view:public'] effect: EFFECT_ALLOW roles: user condition: match: expr: request.resource.attr.public == true EOF``` If you try the request again, bugsbunny now has view:public access to the album owned by daffyduck but not comment access. Can you figure out how to update the policy to give him comment access as well? ``` cat <<EOF | curl --silent \"http://localhost:3592/api/check/resources?pretty\" -d @- { \"requestId\": \"quickstart\", \"principal\": { \"id\": \"bugs_bunny\", \"roles\": [ \"user\" ], \"attr\": { \"beta_tester\": true } }, \"resources\": [ { \"actions\": [ \"view:public\", \"comment\" ], \"resource\": { \"kind\": \"album:object\", \"id\": \"BUGS001\", \"attr\": { \"owner\": \"bugs_bunny\", \"public\": false, \"flagged\": false } } }, { \"actions\": [ \"view:public\", \"comment\" ], \"resource\": { \"kind\": \"album:object\", \"id\": \"DAFFY002\", \"attr\": { \"owner\": \"daffy_duck\", \"public\": true, \"flagged\": false } } } ] } EOF``` ``` { \"requestId\": \"quickstart\", \"results\": [ { \"resource\": { \"id\": \"BUGS001\", \"kind\": \"album:object\" }, \"actions\": { \"comment\": \"EFFECT_ALLOW\", \"view:public\": \"EFFECT_ALLOW\" } }, { \"resource\": { \"id\": \"DAFFY002\", \"kind\": \"album:object\" }, \"actions\": { \"comment\": \"EFFECT_DENY\", \"view:public\": \"EFFECT_ALLOW\" } } ] }``` Once you are done experimenting, the Cerbos server can be stopped with the following command: ``` docker kill cerbos``` Explore the demo apps built with Cerbos Read more about Cerbos policies Join the Cerbos community on Slack Ask us anything via help@cerbos.dev Visit the Cerbos website" } ]
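Returning to the exercise posed in the quickstart above: one possible answer (a sketch, not the official solution) is to widen the public-album rule so it grants comment as well as view:public:

```
cat > cerbos-quickstart/policies/resource_album.yaml <<EOF
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  importDerivedRoles:
    - common_roles
  resource: "album:object"
  rules:
    - actions: ['*']
      effect: EFFECT_ALLOW
      derivedRoles:
        - owner

    # Grant both actions to any user, but only on public albums.
    - actions: ['view:public', 'comment']
      effect: EFFECT_ALLOW
      roles:
        - user
      condition:
        match:
          expr: request.resource.attr.public == true
EOF
```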
{ "category": "Provisioning", "file_name": "inspec.md", "project_name": "Chef InSpec", "subcategory": "Security & Compliance" }
[ { "data": "Chef InSpec is an open-source framework for testing and auditing your applications and infrastructure. It compares the actual state of your system with the desired state that you express in easy-to-read and easy-to-write Chef InSpec code. It detects violations and displays findings in the form of a report, but puts you in control of remediation. Chef InSpec is a run-time framework and rule language used to specify compliance, security, and policy requirements. It includes a collection of resources that help you write auditing controls quickly and easily. Chef InSpec uses profiles to audit infrastructure. An InSpec profile organizes multiple controls into a reusable artifact. You can describe your profiles with metadata, version them, pin them to specific versions of InSpec, define specific platforms that a profile can test, and define profile dependencies. A control defines a regulatory recommendation or requirement for the state of a system. Each profile can have many controls and each control audits different aspects of a system. Chef InSpec resources allow you to test specific parts of your infrastructure. Chef InSpec has 1188 resources ready to usefrom Apache2 to ZFS pool. This includes resources for testing AWS, Azure, AliCloud, and GCP cloud infrastructure, and you can create your own custom resources if we dont have a resource that meets your needs. InSpec reporters format and deliver the results of an InSpec audit run. You can output results to the standard output; to text formats like JSON, HTML, or plain text; or send the results directly to Chef Automate. Run your tests wherever your infrastructure islocally or in the cloud. Chef InSpec is designed for platforms and treats operating systems as special cases. Chef InSpec helps you, whether you use Windows Server on your own hardware or run Linux in Docker containers in the cloud. As for the cloud, you can use Chef InSpec to target applications and services running on Alibaba, AWS, Azure, and GCP. The InSpec community created several open-source profiles that are free to use. Use the inspec supermarket profiles command to list the available profiles, or view them in Chef Supermarket. This includes the DevSec Hardening Framework, a set of server hardening profiles. Chef offers premium CIS- and STIG-based profiles for compliance scanning across a range of enterprise assets. Was this page helpful? Help us improve this document. Still stuck? How can we improve this document? Thank you for your feedback! Page Last Modified: November 16, 2023 Copyright 2024 Progress Software Corporation and/or its subsidiaries or affiliates. All Rights Reserved." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Confidential Containers", "subcategory": "Security & Compliance" }
[ { "data": "The documentation for each sub-project of the Confidential Containers project is available in the respective tabs, checkout the links below. These slides provide a high-level overview of all the subprojects in the Confidential Containers. High level overview of Confidential Containers Demo you can reproduce to try Confidential Containers Depiction of typical Confidential Container use cases and how they can be addressed using the projects tools. Documentation for Kata Containers pertaining to Confidential Containers Trusted Components for Attestation and Secret Management Confidential Container Tools and Components Documentation for Cloud API Adaptor a.k.a Peer Pods About Confidential Containers" } ]
{ "category": "Provisioning", "file_name": "docs.github.com.md", "project_name": "Copa", "subcategory": "Security & Compliance" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "en.md", "project_name": "Clair", "subcategory": "Security & Compliance" }
[ { "data": "Featured links Learn how to use RedHat products, find answers, and troubleshoot problems. Support application deploymentsfrom on premise to the cloud to the edgein a flexible operating environment. Quickly build and deploy applications at scale, while you modernize the ones you already have. Create, manage, and dynamically scale automation across your entire enterprise. Deploying and managing customized RHEL system images in hybrid clouds Install, configure and customize RedHat Developer Hub Setting up clusters and accounts Creating Ansible playbooks RedHat Ansible Lightspeed with IBM watsonx Code Assistant basics Navigating features and services Get answers quickly by opening a support case, directly access our support engineers during weekday business hours via live chat, or speak directly with a RedHat support expert by phone. Whether youre a beginner or an expert with RedHat Cloud Services products and solutions, these learning resources can help you build whatever your organization needs. Explore resources and tools that help you build, deliver, and manage innovative cloud-native apps and services. We help RedHat users innovate and achieve their goals with our products and services with content they can trust. RedHat is committed to replacing problematic language in our code, documentation, and web properties. For more details, see the RedHat Blog. We deliver hardened solutions that make it easier for enterprises to work across platforms and environments, from the core datacenter to the network edge." } ]
{ "category": "Provisioning", "file_name": "github-privacy-statement.md", "project_name": "Copa", "subcategory": "Security & Compliance" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing activity. The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioner's Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commission's decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/.
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHub's behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "data. EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. Under certain conditions, an individual may invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information, visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. We'll retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days' prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Click here for the French version: Déclaration de confidentialité de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure, and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users' interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "efforts. If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "partners. For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHub's websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the page's footer to cookie settings. You can express your preferences at any time by clicking on that link and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or" }, { "data": "services. That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties' direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law. 
California Customers may request further information about our compliance with this law by emailing privacy[at]github[dot]com. Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia, you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com." } ]
{ "category": "Provisioning", "file_name": "github-terms-of-service.md", "project_name": "Copa", "subcategory": "Security & Compliance" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organization's data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "post. You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 through D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 through D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "control). If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "content. In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "such. You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) is known to you before we disclose it to you; (c) is independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) is disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service \"as is\" and \"as available\", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service or that otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "expense. Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Copa", "subcategory": "Security & Compliance" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "operator. For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific language, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` or, to match JavaScript files whose path contains a src directory: ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? glob character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` To match a literal ? instead, you can quote the string: ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
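The repo:, language:, and path: qualifiers above combine freely with the boolean operators from the earlier section, so a single query can narrow results along several axes at once. As a sketch (the repository is the one used in the examples above; the path glob is illustrative):

```
repo:github-linguist/linguist language:ruby NOT path:*.md
```

Following the rules described above, this matches Ruby files in github-linguist/linguist whose paths do not match the *.md glob.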
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, given the following query, code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code:

```
printf("hello world\n");
```

If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear.

Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches are also case-insensitive, and thus a pattern matching tHiS would return This, THIS, and this in addition to any instances of tHiS.
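As a worked example tying together the escaping rules from the regular expressions section above (an illustrative query, not taken from the official docs): to express the printf example unambiguously as a regex, the parentheses must be escaped so they match literally, and the backslash must be doubled so that \\n matches the two source characters \n rather than a real newline:

```
/printf\("hello world\\n"\)/
```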
{ "category": "Provisioning", "file_name": "docker-compose.md", "project_name": "Curiefense", "subcategory": "Security & Compliance" }
[ { "data": "This page describes the tasks necessary to deploy Curiefense using Docker Compose. The tasks are described sequentially below: Clone the Repository TLS Setup Set Deployment Variables Deploy Curiefense Test the Deployment Clean Up During this process, you might find it helpful to read the descriptions (which include the purpose, secrets, and network/port details) of the services and their containers: Services and Container Images If during this process you need to rebuild an image, see the instructions here: Building/Rebuilding an Image. Clone the repository, if you have not already done so: ``` git clone https://github.com/curiefense/curiefense.git``` This documentation assumes it has been cloned to ~/curiefense. A Docker Compose deployment can use TLS for communication with Curiefense's UI server and also for the protected service, but this is optional. (If you do not choose to set it up, HTTPS will be disabled.) If you do not want Curiefense to use TLS, then skip this step and proceed to the next section. Otherwise, generate the certificate(s) and key(s) now. To enable TLS for the protected site/application, go to curiefense/deploy/compose/curiesecrets/curieproxy_ssl/ and do the following: Edit site.crt and add the certificate. Edit site.key and add the key. To enable TLS for the nginx server that is used by uiserver, go to curiefense/deploy/compose/curiesecrets/uiserver_ssl/and do the following: Edit ui.crt and add the certificate. Edit ui.key and add the key. Docker Compose deployments can be configured in two ways: By setting values for variables in deploy/compose/.env Or by setting OS environment variables (which will override any variables set in.env) These variables are described below. Curiefense uses the storage defined here for synchronizing configuration changes between confserver and the Curiefense sidecars. By default, this points to the local_bucket Docker volume: ``` $ grep CURIEBUCKETLINK .env CURIEBUCKETLINK=file:///bucket/prod/manifest.json``` For multi-node deployments, or to use S3 for a single node, replace this value with the URL of an S3 bucket: ``` CURIEBUCKETLINK=s3:///BUCKETNAME/prod/manifest.json``` In that case, you will need to supply AWS credentials in deploy/compose/curiesecrets/s3cfg, following this template: ``` [default] access_key = AAAAAAAAAAAAAAAAAAAA secret_key = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA``` The address of the destination service for which Curiefense acts as a reverse proxy. By default, this points to the echo container, which simply echoes the HTTP requests it receives. Defaults to main (the latest stable image, automatically built from the main branch). To run a version that matches the contents of your working directory, use the following command: ``` DOCKER_TAG=\"$(git describe --tag --long --dirty)-$(git rev-parse --short=12 HEAD:curiefense)\"``` Once the tasks above are completed, run these commands: ``` cd curiefense/deploy/compose/ docker-compose up``` After deployment, the Echo service should be running and protected behind Curiefense. 
You can test the success of the deployment by querying it:

```
$ curl http://localhost:30081/
Request served by echo

HTTP/1.1 GET /

Host: localhost:30081
X-Envoy-Internal: true
X-Request-Id: 57dd8be5-6040-491a-903e-7ef3734ab9db
X-Envoy-Expected-Rq-Timeout-Ms: 15000
User-Agent: curl/7.74.0
Accept: */*
X-Forwarded-For: 172.18.0.1
X-Forwarded-Proto: http
```

Also verify the following:

- The UIServer is now available at http://localhost:30080
- Grafana is now available at http://localhost:30300
- The confserver is now available at http://localhost:30000/api/v1/

To stop all containers and remove any persistent data stored in volumes, run the following commands:

```
docker-compose rm -f && docker volume prune -f
```
{ "category": "Provisioning", "file_name": "istio-via-helm.md", "project_name": "Curiefense", "subcategory": "Security & Compliance" }
[ { "data": "The instructions below show how to install Curiefense on a Kubernetes cluster, embedded in an Istio service mesh. The following tasks, each described below in sequence, should be performed: Clone the Helm Repository Create a Kubernetes Cluster Running Helm Reset State Create Namespaces Setup storage Setup Secrets Setup TLS Deploy Istio and Curiefense Images Deploy the (Sample) App Expose Curiefense Services Using NodePorts Access Curiefense Services At the bottom of this page is a Reference section describing the charts and configuration variables. During this process, you might find it helpful to read the descriptions (which include the purpose, secrets, and network/port details) of the services and their containers: Services and Container Images Clone the repository, if you have not already done so: ``` git clone https://github.com/curiefense/curiefense-helm.git``` This documentation assumes it has been cloned to ~/curiefense-helm. Access to a Kubernetes cluster is required. Dynamic provisioning of persistent volumes must be supported. To set a StorageClass other than the default, change or override variable storageclassname in ~/curiefense-helm/curiefense-helm/curiefense/values.yaml. Below are instructions for several ways to achieve this: Using minikube, Kubernetes 1.23.3 Using Google GKE, Kubernetes 1.23 Using Amazon EKS, Kubernetes 1.23 You will need to install the following clients: Install kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl/) -- use the same version as your cluster. Install Helm v3 (https://helm.sh/docs/intro/install/) This section describes the install for a single-node test setup (which is generally not useful for production). Starting from a fresh ubuntu 21.04 VM: Install docker (https://docs.docker.com/engine/install/ubuntu/), and allow your user to interact with docker with sudo usermod -aG docker $USER && newgrp docker Install minikube (https://minikube.sigs.k8s.io/docs/start/) ``` minikube start --kubernetes-version=v1.23.3 --driver=docker --memory='8g' --cpus 6 minikube addons enable ingress``` Start a screen or tmux, and keep the following command running: ``` minikube tunnel``` ``` gcloud container clusters create curiefense-gks --num-nodes=1 --machine-type=n1-standard-4 --cluster-version=1.23 --region=us-central1 gcloud container clusters get-credentials curiefense-gks``` Create a cluster ``` eksctl create cluster --name curiefense-eks-2 --version 1.23 --nodes 1 --nodes-max 1 --managed --region us-east-2 --node-type m5.xlarge``` If you have a clean machine where Curiefense has never been installed, skip this step and go to the next. Otherwise, run these commands: ``` helm delete curiefense helm delete -n curiefense curiefense helm delete -n istio-system istio-ingress helm delete -n istio-system istiod helm delete -n istio-system istio-base``` Ensure that helm ls -a --all-namespaces outputs nothing. Run the following commands: ``` kubectl create namespace curiefense kubectl create namespace istio-system``` Curiefense's confserver exports configurations to object storage services, from which they are retrieved by curieproxy. Four backends are currently supported: AWS S3, Google Cloud Storage, minio (which can be self-hosted), or local storage (for single-node test deployments). To use curiefense, you must pick one, and define Secrets that allow interacting with the chosen storage service (except for local storage). Encode the AWS S3 credentials that have r/w access to the S3 bucket. 
This yields a base64 string:

```
cat << EOF | base64 -w0
[default]
access_key = xxxxxxxxxxxxxxxxxxxx
secret_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF
```

Create a local file called s3cfg.yaml, with the contents below, replacing both occurrences of BASE64_S3CFG with the previously obtained base64 string:

```
apiVersion: v1
kind: Secret
data:
  s3cfg: "BASE64_S3CFG"
metadata:
  namespace: curiefense
  labels:
    app.kubernetes.io/name: s3cfg
  name: s3cfg
type: Opaque
---
apiVersion: v1
kind: Secret
data:
  s3cfg: "BASE64_S3CFG"
metadata:
  namespace: istio-system
  labels:
    app.kubernetes.io/name: s3cfg
  name: s3cfg
type: Opaque
```

Deploy this secret to the cluster:

```
kubectl apply -f s3cfg.yaml
```

For Google Cloud Storage, create a bucket, and a service account that has read/write access to the bucket. Obtain a private key for this account, which should look like this:

```
{
  "type": "service_account",
  "project_id": "PROJECT",
  "private_key_id": "1234abcd1234abcd1234abcd1234abcd1234abcd",
  "private_key": "-----BEGIN PRIVATE KEY-----\nMIIE.....ABCD=\n-----END PRIVATE KEY-----\n",
  "client_email": "....@PROJECT.iam.gserviceaccount.com",
  "client_id": "123412341234123412341",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/....%40PROJECT.iam.gserviceaccount.com"
}
```

Base64-encode this key, then create a local file called gs.yaml, with the contents below, replacing both occurrences of BASE64_GS_PRIVATE_KEY with the previously obtained base64 string:

```
apiVersion: v1
kind: Secret
data:
  gs.json: "BASE64_GS_PRIVATE_KEY"
metadata:
  labels:
    app.kubernetes.io/name: gs
  name: gs
  namespace: curiefense
type: Opaque
---
apiVersion: v1
kind: Secret
data:
  gs.json: "BASE64_GS_PRIVATE_KEY"
metadata:
  labels:
    app.kubernetes.io/name: gs
  name: gs
  namespace: istio-system
type: Opaque
```

Deploy this secret to the cluster:

```
kubectl apply -f gs.yaml
```

Set the curieconf_manifest_url variables in curiefense-helm/curiefense/values.yaml and istio-helm/charts/gateways/istio-ingress/values.yaml to the following URL: gs://BUCKETNAME/prod/manifest.json (replace BUCKETNAME with the actual name of the bucket). Also set the curiefense_bucket_type variables in the same values.yaml files to gs.

For minio, install a minio server, and create a bucket and a Service Account that has read/write permissions to that bucket. The Curiefense helm charts may be used to deploy such a minio server (single-node, default credentials, for testing). Encode the minio credentials that have r/w access to the bucket. This yields a base64 string:

```
cat << EOF | base64 -w0
[default]
access_key = minioadmin
secret_key = minioadmin
EOF
```

Create a local file called miniocfg.yaml, with the contents below, replacing both occurrences of BASE64_MINIOCFG with the previously obtained base64 string:

```
apiVersion: v1
kind: Secret
data:
  miniocfg: "BASE64_MINIOCFG"
metadata:
  labels:
    app.kubernetes.io/name: miniocfg
  name: miniocfg
  namespace: curiefense
type: Opaque
---
apiVersion: v1
kind: Secret
data:
  miniocfg: "BASE64_MINIOCFG"
metadata:
  labels:
    app.kubernetes.io/name: miniocfg
  name: miniocfg
  namespace: istio-system
type: Opaque
```

Deploy this secret to the cluster:

```
kubectl apply -f miniocfg.yaml
```

An example miniocfg.yaml file is provided in ~/curiefense-helm/curiefense-helm/example-miniocfg.yaml.
It contains default credentials for minio that will work with the minio installation provided in the Curiefense helm charts. Set the curieconf_manifest_url variables in curiefense-helm/curiefense/values.yaml and istio-helm/charts/gateways/istio-ingress/values.yaml to the following URL: minio://BUCKETNAME/prod/manifest.json (replace BUCKETNAME with the actual name of the bucket; use curiefense-minio-bucket with the minio installation that is provided in the Curiefense helm charts). Also set the curiefense_bucket_type variables in the same values.yaml files to minio.

For local storage, on clusters where all istio ingress proxies as well as the confserver run on the same Kubernetes node (typically test environments), a simple hostPath volume can be used. It is mounted to /bucket on the host machine, as well as in relevant containers. Set the curieconf_manifest_url variables in curiefense-helm/curiefense/values.yaml and istio-helm/charts/gateways/istio-ingress/values.yaml to the following URL: file:///bucket/prod/manifest.json. Also set the curiefense_bucket_type variables in the same values.yaml files to local-bucket.

Using TLS is optional. Follow these steps only if you want to use TLS for communicating with the UI server, and you do not rely on istio to manage TLS. The UIServer can be made reachable over HTTPS. To do that, two secrets have to be created to hold the TLS certificate and TLS key. Create a local file called uiserver-tls.yaml, replacing TLS_CERT_BASE64 with the base64-encoded PEM X509 TLS certificate, and TLS_KEY_BASE64 with the base64-encoded TLS key:

```
apiVersion: v1
data:
  uisslcrt: TLS_CERT_BASE64
kind: Secret
metadata:
  labels:
    app.kubernetes.io/name: uisslcrt
  name: uisslcrt
  namespace: curiefense
type: Opaque
---
apiVersion: v1
data:
  uisslkey: TLS_KEY_BASE64
kind: Secret
metadata:
  labels:
    app.kubernetes.io/name: uisslkey
  name: uisslkey
  namespace: curiefense
type: Opaque
```

Deploy this secret to the cluster:

```
kubectl apply -f uiserver-tls.yaml
```

An example file with self-signed certificates is provided at ~/curiefense-helm/curiefense-helm/example-uiserver-tls.yaml.

Deploy the Istio service mesh:

```
cd ~/curiefense-helm/istio-helm
DOCKER_TAG=main ./deploy.sh
```

And then the Curiefense components:

```
cd ~/curiefense-helm/curiefense-helm
DOCKER_TAG=main ./deploy.sh
```

The application to be protected by Curiefense should now be deployed. These instructions are for the sample application bookinfo, which is deployed in the default Kubernetes namespace. Installation instructions are summarized below; more detailed instructions are available on the istio website.

Add the istio-injection=enabled label, which will make Istio automatically inject the necessary sidecars into applications that are deployed in the default namespace.
```
kubectl label namespace default istio-injection=enabled
```

```
cd ~
wget 'https://github.com/istio/istio/releases/download/1.16.1/istio-1.16.1-linux-amd64.tar.gz'
tar -xf istio-1.16.1-linux-amd64.tar.gz
cd ~/istio-1.16.1/
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
```

Check that bookinfo Pods are running (wait a bit if they are not):

```
kubectl get pod -l app=ratings
```

Sample output:

```
NAME                         READY   STATUS    RESTARTS   AGE
ratings-v1-f745cf57b-cjg69   2/2     Running   0          79s
```

Check that the application is working by querying its API directly, without going through the Istio service mesh:

```
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
```

Expected output:

```
<title>Simple Bookstore App</title>
```

Set the GATEWAY_URL variable by following instructions on the Istio website. Alternatively, with minikube, this command can be used instead:

```
export GATEWAY_URL=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80
```

Check that bookinfo is reachable through Istio:

```
curl -sS http://$GATEWAY_URL/productpage | grep -o "<title>.*</title>"
```

Expected output:

```
<title>Simple Bookstore App</title>
```

If this error occurs: Could not resolve host: a6fdxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxx.us-west-2.elb.amazonaws.com ...the ELB is not ready yet. Wait and retry until it becomes available (typically a few minutes).

Run this query to access the protected website, bookinfo, and thus generate an access log entry:

```
curl http://$GATEWAY_URL/TEST_STRING
```

Run this to ensure that the logs have been emitted:

```
kubectl logs -n istio-system -l app=istio-ingressgateway -c istio-proxy | grep -oE '"path":"/TEST_STRING"'
```

Expected output:

```
"path":"/TEST_STRING"
```

Run the following commands to expose Curiefense services through NodePorts. Warning: if the machine has a public IP, the services will be exposed on the Internet.

Start with this command:

```
kubectl apply -f ~/curiefense-helm/curiefense-helm/expose-services.yaml
```

The following command can be used to determine the IP address of your cluster nodes on which services will be exposed:

```
kubectl get nodes -o wide
```

If you are using minikube, also run the following commands on the host in order to expose services on the Internet (e.g. if you are running this on a cloud VM):

```
sudo iptables -t nat -A PREROUTING -p tcp --match multiport --dports 30000,30080,30300,30443 -j DNAT --to $(minikube ip)
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to $(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
sudo iptables -I FORWARD -p tcp --match multiport --dports 80,30000,30080,30300,30443,30444 -j ACCEPT
```

If you are using Amazon EKS, you will also need to allow inbound connections for port range 30000-30500 from your IP. Go to the EC2 page in the AWS console, select the EC2 instance for the cluster (named curiefense-eks-...-Node), select the "Security" pane, select the security group (named eks-cluster-sg-curiefense-eks-[0-9]+), then add the incoming rule.

Services are now available on the IP address of any of the Kubernetes nodes, through a NodePort. For a full list of ports used by Curiefense containers, see the Reference page on services and containers.
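As a quick sanity check, you can curl two of the exposed NodePorts directly (a sketch; NODE_IP stands for a node address obtained from kubectl get nodes -o wide, and the ports are those listed in the NodePort table at the end of this page):

```
curl -I http://NODE_IP:30080/       # UIServer over HTTP
curl http://NODE_IP:30000/api/v1/   # confserver configuration API
```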
Helm charts are divided as follows:

- curiefense-admin - confserver and UIServer.
- curiefense-dashboards - Grafana and Prometheus.
- curiefense-log - elasticsearch, filebeat, fluentd, kibana, logstash.
- curiefense-proxy - curielogger and redis.

Configuration variables in ~/curiefense-helm/curiefense-helm/curiefense/values.yaml can be modified or overridden to fit your deployment needs:

- Variables in the images section define the Docker image names for each component. Override this if you want to host images on your own private registry.
- storage_class_name is the StorageClass that is used for dynamic provisioning of Persistent Volumes. It defaults to null (the default storage class, which works out of the box on EKS, GKE and minikube).
- The ..._storage_size variables define the size of persistent volumes. The defaults are fine for a test or small-scale deployment.
- curieconf_manifest_url is the URL of the AWS S3 or Google Cloud Storage bucket that is used to synchronize configurations between the confserver and the Curiefense Istio sidecars.
- docker_tag defines the image tag versions that should be used. deploy.sh will override this to deploy a version that matches the current working directory, unless the DOCKER_TAG environment variable is set.

Components added or modified by Curiefense are defined in ~/curiefense-helm/istio-helm/charts/gateways/istio-ingress/. Compared to the upstream Istio Kubernetes distribution, we add or change the following Pods:

- An initContainer called curiesync-initialpull has been added. It synchronizes configuration before running Envoy.
- A container called curiesync has been added. It periodically fetches the configuration that should be applied from an S3 or GS bucket (configurable with the curieconf_manifest_url variable), and makes it available to Envoy. This configuration is used by the Lua code that inspects traffic.
- The container called istio-proxy now uses our custom Docker image, embedding our HTTP Filter, written in Lua.
- An EnvoyFilter has been added. It forwards access logs to curielogger (see curiefense_access_logs_filter.yaml).
- An EnvoyFilter has been added. It runs Curiefense's Lua code to inspect incoming traffic on the Ingress Gateways (see curiefense_lua_filter.yaml).

Configuration variables in ~/curiefense-helm/istio-helm/charts/gateways/istio-ingress/values.yaml can be modified or overridden to fit your deployment needs:

- gw_image defines the name of the image that contains our filtering code and modified Envoy binary.
- curiesync_image defines the name of the image that contains scripts that synchronize local Envoy configuration with the AWS S3 bucket defined in curieconf_manifest_url.
- curieconf_manifest_url is the URL of the AWS S3 bucket that is used to synchronize configurations between the confserver and the Curiefense Istio sidecars.
- curiefense_namespace should contain the name of the namespace where Curiefense components defined in ~/curiefense-helm/curiefense-helm/ are running.
- redis_host defines the hostname of the redis server that will be used by curieproxy. Defaults to the provided redis StatefulSet. Override this to replace the redis instance with one you supply.
- initial_curieconf_pull defines whether a configuration should be pulled from the AWS S3 bucket before running Envoy (true), or whether traffic should be allowed to flow with a default configuration until the next synchronization (typically every 10s).
| Service | Node Port | Notes |
| --- | --- | --- |
| Curiefense UI over HTTP | 30080 | |
| Curiefense UI over HTTPS | 30443 | |
| Grafana over HTTP | 30300 | |
| Kibana over HTTP | 30601 | |
| Configuration API | 30000 | swagger at http://IP:30000/api/v1 |
| Elasticsearch | 30200 | |
{ "category": "Provisioning", "file_name": ".md", "project_name": "Curiefense", "subcategory": "Security & Compliance" }
[ { "data": "Curiefense is an API-first, DevOps oriented web-defense HTTP-Filter adapter for Envoy and NGINX. It provides multiple security technologies (WAF, application-layer DDoS protection, bot management, and more) along with real-time traffic monitoring and transparency. Curiefense is fully controllable programmatically. All configuration data (security rulesets, policies, etc.) can be maintained singularly, or as different branches for different environments, as you choose. All changes are versioned, and reverts can be done at any time. Curiefense also has a UI console, discussed in this Manual beginning in the Settings section. This documentation is for version 1.5.0. (To view docs for a different version, choose it at the top of the left sidebar.) Curiefense provides traffic filtering that can be configured differently for multiple environments (e.g. dev/qa/prod), all of which can be administered from one central cluster if desired. Here is an overview of its components. In the diagram above, the Server represents a resource protected by Curiefense (a site, app, service, or API). The User is a traffic source attempting to access that resource. Incoming traffic passes through Curiefense. Hostile requests are blocked. The other components in the diagram represent the Curiefense platform, as follows: Curiefense proxy (represented by the column with the Curiefense logo): Integrated with Envoy or NGINX; performs traffic filtering. Elasticsearch stores access logs. Access Logs: Traffic data viewable via Kibana. Metrics. A Prometheus store of traffic metrics. Dashboard. Grafana dashboard(s) with visual displays of traffic metrics. Web Console. Curiefense's web UI for configuring the platform. Config Server: A service which: Receives configuration edits from the Web Console. Receives configuration edits from API calls (not shown in the diagram). Creates a new configuration version in response to edits. Stores the new version in one or more Cloud Storage buckets. Cloud Storage: Stores versioned configurations. Each Curiefense proxy periodically checks Cloud Storage: when a new version is found there, the proxy downloads it and updates its security posture. For detailed information about the specific containers and services which perform the roles described above, see the reference page on Services and Container Images. Curiefense can run in a variety of environments, depending on your specific needs. It can be adapted to many different use cases. Deployment instructions for several different environments are available in the Installation section of this manual and on the Getting Started page. More will be added in the" }, { "data": "If you create an installation workflow for a situation that is not currently described in this manual, please feel free to submit it for inclusion. Conceptually, there are three primary roles performed by Curiefense: Configuration (allowing admins to define security policies, assign them to URLs, etc.) Filtering (applying the defined Configurations to incoming traffic and blocking hostile requests) Monitoring (displaying traffic data in real-time and in historical logs). Each is discussed below. Curiefense maintains its security parameters as Entries, which are contained in Documents, which are contained in Configurations. A Configuration is a complete definition of Curiefense's behavior for a specific environment. An organization can maintain multiple Configurations (e.g., development, staging, and production). 
Each Configuration contains six Documents (one of each type: ACL Profiles, Rate Limits, etc.). Each Document contains at least one Entry, i.e., an individual security rule or definition. Documents are edited and managed in the Policies & Rules UI or via API.

A Configuration also includes data blobs, which currently are used to store the Maxmind geolocation database. This is where Curiefense obtains its geolocation data and ASN for each request it processes.

A Configuration is the atomic unit for all of Curiefense's parameters. Any edits to a Configuration result in a new Configuration being committed. Configurations are versioned, and can be reverted at any time. When a Configuration is created or modified (whether by the UI console or an API call), the admin pushes it to a Cloud Storage bucket. An important feature of Curiefense is simultaneous publishing to multiple environments: when a Configuration is published, it can be pushed to multiple buckets (each of which can be monitored by one or more environments) all at once, from a single button-push or API call.

Traffic filtering is performed by the Curiefense proxy, as shown in the first diagram above. In other words, this is where the security policies defined in the Configurations are enforced. Some activities (such as rate limiting) require local data storage. Internally, Curiefense uses Redis for this; other storage methods can be used if desired.

Each time a request goes through Curiefense, a detailed log message is pushed to elasticsearch. Traffic data is available in several ways:

- The Curiefense graphical client provides a Kibana Access Log, which provides comprehensive details for requests.
- Curiefense is also integrated with Grafana and Prometheus, for traffic dashboards and other displays.
{ "category": "Provisioning", "file_name": ".md", "project_name": "Dex", "subcategory": "Security & Compliance" }
[ { "data": "Dex is an identity service that uses OpenID Connect to drive authentication for other apps. Dex acts as a portal to other identity providers through connectors. This lets Dex defer authentication to LDAP servers, SAML providers, or established identity providers like GitHub, Google, and Active Directory. Due to their public nature, GitHub and mailing lists are NOT appropriate places for reporting vulnerabilities. Please refer to the projects security disclosure process when reporting issues that may be security related. First touch with Dex Intro to OpenID Connect (basics) Configuring general settings for Dex Documentation about configuration of Dex connectors Most common scenarios and how to solve them Dev Environment Setup, Testing, and Contributing to Dex The following documents are no longer maintained and are archived for reference purposes only 2024 Dex IdP Contributors 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Falco", "subcategory": "Security & Compliance" }
[ { "data": "Falco alerts can easily be forwarded to third-party systems. Their JSON format allows them to be easily consumed for storage, analysis and reaction. Falcosidekick is a proxy forwarder, it acts as central point for any fleet of Falco instances using their http outputs to send their alerts. The currently available outputs are chat, alert, log, storage, streaming systems, etc. Falcosidekick can also add custom fields to the alerts, filter them by priority and expose a Prometheus metrics endpoint. The full documentation and the project repository are here. Falcosidekick can be deployed with Falco in Kubernetes clusters with the official Falco Helm chart. Its configuration can be made through a yaml file and/or env vars. The available outputs in Falcosidekick are: Chat Metrics / Observability Alerting Logs Object Storage FaaS / Serverless Message queue / Streaming Email Database Web SIEM Workflow Other See the available Helm values to configure Falcosidekick. ``` helm install falco falcosecurity/falco \\ -n falco --create-namespace \\ --set falcosidekick.enabled=true \\ --set tty=true ``` Use the env vars to configure Falcosidekick. ``` docker run -d -p 2801:2801 -e SLACK_WEBHOOKURL=XXXX falcosecurity/falcosidekick:2.27.0 ``` Adapt the version and the architecture to your environment. You can find all the releases here. ``` sudo mkdir -p /etc/falcosidekick wget https://github.com/falcosecurity/falcosidekick/releases/download/2.27.0/falcosidekick2.27.0linuxamd64.tar.gz && sudo tar -C /usr/local/bin/ -xzf falcosidekick2.27.0linuxamd64.tar.gz ``` See the example config file to create your own in /etc/falcosidekick/config.yaml. To enable and start the service, you can use a systemd unit /etc/systemd/system/falcosidekick.service like this one: ``` [Unit] Description=Falcosidekick After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 ExecStart=/usr/local/bin/falcosidekick -c /etc/falcosidekick/config.yaml EOF ``` ``` systemctl enable falcosidekick systemctl start falcosidekick ``` Falcosidekick comes with its own interface to visualize the events and get statistics. You can install the UI at the same moment as Falcosidekick by adding the argument --set falcosidekick.webui.enabled=true. ``` helm install falco falcosecurity/falco \\ -n falco --create-namespace \\ --set falcosidekick.enabled=true \\ --set falcosidekick.webui.enabled=true \\ --set tty=true ``` Then create a port-forward to access it: kubectl port-forward svc falco-falcosidekick-ui 2802:2802 -n falco. The default credentials are admin/admin. The full documentation and the repository of the project are here. Let us know! You feedback will help us to improve the content and to stay in touch with our users. Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve." } ]
{ "category": "Provisioning", "file_name": "api-reference.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "Generating API tokens and utilizing endpoints for custom integrations. To use CI/CD Scanning or integrate with many of FOSSA's services, you must provision API tokens. FOSSA allows users to create API tokens to access the API. To create a token, visit your Account Settings: To use the API token for fossa-cli or many of our client integrations, you must set the FOSSAAPIKEY environment variable or pass it directly to the tool/integration. To authenticate and access our API, include an Authorization header in the request: curl -H \"Authorization: Bearer <token>\" \"https://app.fossa.com/<API endpoint>\" Creating a push only API token restricts the users access to only allow uploading builds. The API token will be restricted from reading anything about the project or editing existing information. This token was created with open source project maintainers in mind. The FOSSA API key is required to be set as an environment variable or included in the configuration file whenever integrating FOSSA with a CI system, such as TravisCI. This has the unfortunate side effect of exposing the API key to anyone who makes a pull request. Restricting a user's access with a push only API token is the best way to combat any malicious actors. The steps to create one are as follows: Try it out! Try running FOSSAAPIKEY=<pushonlytoken> fossa report licenses to see what happens when you attempt to access restricted information. The FOSSA API is available for enterprise customers to build custom integrations. FOSSA provides an API to access one of the largest databases of open source projects and metadata in the world. Currently, our registry hosts data on over 23 million components totaling beyond 5TB of data. In addition, our service offers API endpoints by which you can programmatically fetch data about your project and our analysis of it to automate parts of your workflow including: Contact [emailprotected] for more information. Updated 2 months ago" } ]
{ "category": "Provisioning", "file_name": "aws-codebuild.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "FOSSA support for Python projects FOSSA supports Python projects through setuptools, pip, poetry, and pipenv | Tool | Quick Import (app.fossa.com) | CLI (fossa-cli) | |:|:-|:-| | pip | requirements.txt and setup.py | req*.txt and setup.py | | setuptools/distutils | setup.py | nan | | distribute | nan | nan | | poetry | nan | pyproject.toml and poetry.lock | | pipenv | nan | Pipfile.lock | | conda | nan | environment.yml | Requires Standard Conventions FOSSA currently assumes that Python codebases using Repository Scanning are following proper conventions where running setup.py or pip install -r <requirements.txt> is expected. If setup.py files are heavily customized or require non-standard versions of Python, FOSSA may fail to run and analyze them. When Python code is imported, FOSSA will find and run any setup.py files and recursively traverse dependencies that are brought in via the install_requires parameter. If there are any requirements.txt present, FOSSA will also resolve those entries and treat them as direct dependencies. Sub-dependencies of packages brought in from requirements.txt are ignored, as consistent with standard build behavior. Complex Builds Supported For complex Python builds that rely on custom tooling, scripts or virtual env, CI/CD Scanning is the ideal integration path. To get started, install the latest release of fossa-cli from our GitHub releases page: ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash ``` Once installed, run fossa analyze inside of your repo's root directory. View extended documentation here. You can configure FOSSA to fetch dependencies from private PyPI registries published through tools like Artifactory or Sonatype Nexus. In order for FOSSA to reach private feeds, go to your Python Language Settings under Account Settings > Languages > Python and add your login credentials. Pip Settings Now you should be able to resolve private PyPI packages in FOSSA. FOSSA supports most standard ways Python packages can be included, ranging from packages on PyPI to packages stored in archives / VCS hosts. When possible, FOSSA will seek source code formats over binary/archive formats like .egg and .whl. If an egg or wheel is downloaded, its contents are inspected for code auditing and dependency information. | VCS | Supported | |:|:| | Git | Y | | hg | N | | svb | N | | bzr | N | Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "atlassian-jira.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "FOSSA supports JavaScript and Node.js codebases through NPM, Yarn, and Pnpm. | Tool | Quick Import (app.fossa.com) | CLI (fossa-cli) | |:-|:--|:--| | npm | package.json, package-lock.json | package.json, package-lock.json | | Yarn | yarn.lock | package.json, yarn.lock | | Pnpm | nan | pnpm-lock.yaml | | Bower | bower.json | nan | If you use FOSSA's automated build infrastructure, FOSSA will resolve dependencies by attempting to build your codebase via npm install --production or yarn install --frozen-lockfile. If this fails or is disabled by setting prefermediateddependencies to false, FOSSA will fall back to statically analyzing and traversing your package manifests (package.json, yarn.lock, component.json, bower.json). By default, FOSSA filters out any devDependencies entries. If you are using FOSSA's automated builds, FOSSA will prefer the lockfiles you provide. If you are using have build scripts that will edit your build behavior, it is recommended that you use Provided Builds. To get started, install the latest release of fossa-cli from our GitHub releases page: ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash ``` Once installed, run fossa analyze inside of your repo's root directory. You can view further documentation on our implementation, as well as inspect the code directly. You can configure authentication to enable FOSSA to fetch dependencies from authenticated registries such as private npm packages, private Artifactory instances, or npm Enterprise instances. In order for FOSSA to reach privately-scoped packages on [npmjs.com], go to your Javascript Language Settings under Account Settings > Languages > Javascript and add your login credentials: npm Authentication Settings After hitting \"Save\", you should be able to \"retry\" any unreachable npm dependencies in FOSSA and begin to analyze them. Finding Access Credentials If you don't know your credentials, you can find them in .npmrc or ~/.npmrc after running npm login. Learn more. On-Prem Only npm Enterprise and Artifactory-configured npm registires are only supported in FOSSA on-prem. To configure authentication on-prem, your FOSSA admin must edit FOSSA's config.env file with one of two authentication methods. Check your .npmrc to see which of the two formats below you use. For newer registries or NPM Enterprise, FOSSA supports tokens for authentication. If you are using this method, you can find a line in your .npmrc formatted as //REGISTRYURL/:authToken=AUTH_TOKEN. Take the AUTH_TOKEN and add the following config: ``` fetchersnpmauthtoken=AUTHTOKEN ``` Many systems still use legacy authentication, especially if you are using a private registry like Artifactory. Look for email, _auth and username in your .npmrc. ``` fetchersnpmauthemail fetchersnpmauthtoken # _auth parameter in .npmrc fetchersnpmauthusername ``` After configuring, your FOSSA admin must run fossa restart. If you are using a private registry like Artifactory for you NPM code, your FOSSA admin can specify a private registry URL: ``` fetchersnpmregistry=YOURREGISTRYURL ``` Often private registries require authentication, which is covered above under Private Packages. See here for FOSSA's NPM Enterprise integration. Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "azure-repos.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "FOSSA support for Python projects FOSSA supports Python projects through setuptools, pip, poetry, and pipenv | Tool | Quick Import (app.fossa.com) | CLI (fossa-cli) | |:|:-|:-| | pip | requirements.txt and setup.py | req*.txt and setup.py | | setuptools/distutils | setup.py | nan | | distribute | nan | nan | | poetry | nan | pyproject.toml and poetry.lock | | pipenv | nan | Pipfile.lock | | conda | nan | environment.yml | Requires Standard Conventions FOSSA currently assumes that Python codebases using Repository Scanning are following proper conventions where running setup.py or pip install -r <requirements.txt> is expected. If setup.py files are heavily customized or require non-standard versions of Python, FOSSA may fail to run and analyze them. When Python code is imported, FOSSA will find and run any setup.py files and recursively traverse dependencies that are brought in via the install_requires parameter. If there are any requirements.txt present, FOSSA will also resolve those entries and treat them as direct dependencies. Sub-dependencies of packages brought in from requirements.txt are ignored, as consistent with standard build behavior. Complex Builds Supported For complex Python builds that rely on custom tooling, scripts or virtual env, CI/CD Scanning is the ideal integration path. To get started, install the latest release of fossa-cli from our GitHub releases page: ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash ``` Once installed, run fossa analyze inside of your repo's root directory. View extended documentation here. You can configure FOSSA to fetch dependencies from private PyPI registries published through tools like Artifactory or Sonatype Nexus. In order for FOSSA to reach private feeds, go to your Python Language Settings under Account Settings > Languages > Python and add your login credentials. Pip Settings Now you should be able to resolve private PyPI packages in FOSSA. FOSSA supports most standard ways Python packages can be included, ranging from packages on PyPI to packages stored in archives / VCS hosts. When possible, FOSSA will seek source code formats over binary/archive formats like .egg and .whl. If an egg or wheel is downloaded, its contents are inspected for code auditing and dependency information. | VCS | Supported | |:|:| | Git | Y | | hg | N | | svb | N | | bzr | N | Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "bitbucket-server-stash.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "This guide is for your Bitbucket Server/Atlassian Stash admin to set up FOSSA On-Prem's access to your internal code. Note: This was written for Bitbucket Server v4.0.6+ You first need to add an application link so that users with a login on Bitbucket Server can view their projects through FOSSA. ``` Fill in \"fossa\" for all options: ``` Create a Public key ``` openssl genrsa -out privkey.pem 2048 openssl rsa -pubout -in privkey.pem -out pubkey.pem ``` ``` Consumer Key: fossa Consumer Name: fossa Public Key: MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCqGKukO1De7zhZj6+H0qtjTkVxwTCpvKe4eCZ0FPqri0cb2JZfXJ/DgYSF6vUpwmJG8wVQZKjeGcjDOL5UlsuusFncCzWBQ7RKNUSesmQRMSGkVb1/3j+skZ6UtW+5u09lHNsj6tQ51s1SPrCBkedbNf0Tp0GbMJDyR4e9T04ZZwIDAQAB ``` Now users can successfully connect their Bitbucket Server accounts with FOSSA. FOSSA currently requires a companion bot account on your Bitbucket Server instance with global read access to analyze all internal repositories. This will be replaced in future updates, but is currently required for FOSSA to fetch code. Go to Settings > Accounts > Users > Create User. For username/password, use the bitbucketserver_credentials config in FOSSA's config.env (default below): ``` bitbucketservercredentialsbasic_username=fossabot bitbucketservercredentialsbasic_password=fossa123 ``` Ensure fossabot has global read access fossabot needs to be able to clone any repository in your instance of Bitbucket Server. The easiest way of doing this is giving the account admin privelages in Settings > Accounts > Global Permissions: ``` If you need to custom-configure a role for `fossabot`, make sure the account still has global read afterwards (i.e. try cloning repos across different projects as `fossabot`). ``` Now you should be all set up! Users on FOSSA should be able browse and import their repositories on Bitbucket Server through Bulk Import. NOTE: fossabot is not accessible to average users of FOSSA, but serves as an internal proxy for FOSSA to fetch code. Normal users will only be able to browse and import what they have access to normally through Bitbucket Server. After importing, automatic updates need to be configured manually in two places for each imported project. On FOSSA via Project > Settings > Update Hooks, select \"Select Update Method...\", choose Webhook and hit Save Changes. On Bitbucket Server, install (if not done already) the webhooks module and enable them on each imported project. View guide here. Copy & Paste Webhook Update URL from the first step to the webhooks in Bitbucket under Post-Receive Webhooks > Enable. ``` bitbucketEnterprise: url: http://bitbucket.test.com cloneWithSSH: false # to enable Git clones over SSH, set cloneWithSSH to true and configure fetchers.git. 
  clientId: fossa
  privateKey: |-
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpAIBAAKCAQEAnFz3C4zivnXaOCSAMxtXL4bEe7RQTEboUi3JV62o+V5LKxJn
    1c0Mp+0SSHS+06i7UPkcCtblZbBMXGupjkMyJ+TGXIKFXdwdY8MyohVtgTcTcKFL
    HiW9bWbSx5x7zIqS78rNQWxsrBEJZgjDsfALtKXV/t7I1G1tEH84nEwF9D8VP/B8
    72xP6vSX2rlYX0IEh9ampEnU+riFnqpYR7CmMkeyrSKOsi6TuqachYIb9qjgNX/o
    EbNAjLcDdH28S/5nkmIEll+vhVbgeL8DJOt3gufSujE16EgqQN8UFfwRXyoAW1G7
    SRBdvDoqu0J5sZvIUpcA5/5tf6EZV0+iTlfcxQIDAQABAoIBADbxI41Xb8TkvEzF
    5pYOoU/91sRw01Y6BB/8HqdESf91down53xklHHdB3OWMgdFXqxRG91jLS/SBsLi
    wa1PRyxlYp3W7u3QDjOjvwLc7KFerOICitaJBEqQureQ8J8qgf7oD79RTc4YHmlP
    4xN++V38d3ka5w5ddNk7GrUwsVbk1ur13X+zpccntmwGUx/oXQxNmPF7TKUcKDmy
    sY2zeOyK1D0I63CHvxxZR3xrUL1jvyEtFdcSNIAwS8kIb+QDlz5O7eFQbhcq4TKA
    iuK9PMBdQ4GG4H5KNmQgjluT6WDO0yfncmOkGPcRi2O/W3UVNx9znQikTXeulR/u
    FWbYgAECgYEAz7t6urgSAUV6GrdQbLbegM26VJOWIOeuJlKzKo0NT03IpyKGgY9Y
    zFc/5c2X0Q7BPOA5Rjw+l35w+flGHs6el0t4AhBA0pD+mZFsJ/rlIlngNnA06a5N
    LXVLgfsF70jJvfu6T2/L0B08mUpvI3RD41mCYdN7FzkuF3HkMoA4+AECgYEAwLHl
    rbMBIhppZU59m4CYjcTuaohckVczT+PsYZ5M/6WfL71VkJxUYdK+Z9vE+K1sref4
    3ofFMirQG/cOmxVezJCpZXYs6+Zqamr5D4KxGAtCLQACaW3BTjB8MpZg2ENf/iya
    SUNXACJoqctrg4wWhlaniXdOIVvhz8w3IMahBMUCgYEAzG307rHczjGAY7BJTmN8
    fod3OmpvkPxPDtnOBi7/jS7AK3K3qeLXAWlPsahtIkiB9JW455y8ADxnlCkzT3gI
    7F1Rwb4a/N3CIIDTTlkDi5WlKA2ulNV6kCThZQ4THhOkrfl/tVMQ4UMUcsqkquBt
    OtzIidskRIt6B4qGhwhWiAECgYEApKadqald24UT79N8sqXUNLdEPVVdO3d2Sdpo
    fhUkmAEuHz254kIiPCA2QEpiaVbOmV6woX0Du9UnU+3r1goRodwuUpsC0WNmJJ5Z
    SK6UogXkuszaQrncxfHZ/ePOxpvzZx03jEh1C5FbO1KtAI9wI8Phji2aXhjDv6ow
    pNn0dj0CgYAp8meFNCQouZRfnwpytOzt6eQUziliYYAVPJZvM9LfhwPua20dRAJx
    Sx9v+duVnOePkWNRTOL4meF6zlxq9sCsuO8qtj0X2qYHzts+UP7HtM3yXNtOsxUZ
    iic9TOz4cCyl2vKaXm8RJ/CxQIxkWmxzOsHigpH8VrzHWugIRQMnyw==
    -----END RSA PRIVATE KEY-----
  username: fossabot
  password: fossa123
```

Note: use the same private key you created in step 3 when setting up the application link. If you have any problems, contact FOSSA support. This guide was written for Bitbucket Server v4.0.6+.
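A quick way to confirm the bot account's global read access, per the note above about cloning repos across different projects as fossabot (a sketch; the host, project key, and repository name are illustrative, using Bitbucket Server's standard /scm/ clone path):

```
git clone http://fossabot:fossa123@bitbucket.test.com/scm/PROJECT/some-repo.git /tmp/fossabot-check
```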
{ "category": "Provisioning", "file_name": "docs.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "New to FOSSA? Sign up for an account on our website or request a demo. Open source is a critical part of your software. In the average modern software product, over 80% of the source code shipped is derived from open source. Each component can have cascading legal, security, and quality implications for your customers, making it one of the most important things to manage correctly. FOSSA helps you manage your open source components. We plug into your development workflow to help your team automatically track, manage, and remediate issues with the open source you use to: By enabling open source, we help development teams increase development velocity and decrease risk. In this guide, you'll find everything you need to set up FOSSA for your team. Check out our Installation Guide to get your first project imported, monitored, and compliant in 5 minutes. You can stay up to date with FOSSA by following us on Twitter @getfossa, on fossa.com/blog or contacting [emailprotected]. Want to build integrations or products on top of FOSSA? Check out our API & Custom Integrations guide and API Reference. Updated about 1 month ago" } ]
{ "category": "Provisioning", "file_name": "circleci.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "Integrating FOSSA with CircleCI This guide is for you to set up a FOSSA project with a CircleCI workflow. The CircleCI integration requires fossa-cli our open source dependency analysis client, to be installed on your CI machine. The client supports all 3 major operating systems (Unix, Darwin/OSX and Windows). To test the CLI, you can install it in your local environment using the command below or download it directly from our Github Releases page. ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash fossa --help ``` First, grab a FOSSA API Key from your FOSSA account under your Integration Settings. NOTE: If you are the maintainer of a public repository you should consider making your API key a Push Only Token. Then, add it to your CircleCI environment variables as FOSSAAPIKEY: Once the environment variable is ready, it's time to edit your .circleci/config.yml file. First, add a step to install fossa-cli when your build starts. Usually the best place to include this is right before the checkout step of your build job when you're still installing the environment pre-reqs: ``` ... jobs: build: ... steps: run: | curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash checkout ... ``` Next, add a step to run the fossa analyze command you just installed in order to upload dependency data from your CircleCI build: ``` run: command: fossa analyze workingdirectory: $YOURCODE_DIRECTORY ``` We recommend inserting this in your .circleci/config.yml file RIGHT AFTER your build/install steps (usually the end of your build section) but BEFORE any tests run. Full Example: ``` version: 2 jobs: build: docker: image: circleci/<language>:<version TAG> steps: run: | curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash checkout run: <build command> run: command: fossa analyze workingdirectory: <repodir> workflows: version: 2 build: jobs: build ``` Now with every CI build, you will be uploading dependency data for analysis back to FOSSA. Customizing with" }, { "data": "To customize your fossa task behavior, add a .fossa.yml file to the root of your VCS. View the .fossa.yml reference on GitHub. You an also create a step in CircleCI that will allow you to pass/fail a build based off your scan status in FOSSA. To accomplish this, simply add a call to fossa test into your test section. ``` run: command: fossa test workingdirectory: <repodir> ``` The fossa test command will poll app.fossa.io or your local FOSSA appliance for updates on your scan status until it gets a response. Then, it will report a relevant exit status to the CI step (to block a failing build) and render rich details about issues directly inline your CircleCI test results. You can customize a timeout on this step using the fossa test --timeout {seconds} flag documented here. The default timeout is set to 600 seconds (10 minutes), but will only be hit in exceptional cases -- most scans should return well under the timeout window. 
Full Example:

```
version: 2
jobs:
  build:
    docker:
      - image: circleci/<language>:<version TAG>
    steps:
      - run: |
          curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash
      - checkout
      - run: <build command>
      - run:
          command: fossa analyze
          working_directory: <repo_dir>
  test:
    docker:
      - image: circleci/<language>:<version TAG>
    steps:
      - checkout
      - run: <test command>
      - run:
          command: fossa test
          working_directory: <repo_dir>
workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test
```

In exceptional cases, you may require your CI to tell FOSSA to pull an update for your code. This is not necessary for most users, but can be accomplished if you are using Automated Builds and have no other possible update strategy. To do this, add the following to your circle.yml file:

```
notify:
  webhooks:
    - url: http://app.fossa.io/hooks/circleci
```

You will also have to update your project settings in FOSSA by navigating to Project > Settings > Update Hooks, and selecting CircleCI in the dropdown.
{ "category": "Provisioning", "file_name": "github.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "Integrating FOSSA with GitHub FOSSA supports and integrates with GitHub tools out of the box. You should be able to sign in with GitHub and immediately get going with importing repos and scanning Pull Requests, but some permission configurations can lead to access issues. If you're in GitHub and not seeing repos or organizations listed, you may need to ensure that your account has the right permissions. Our integration functions as an OAuth App. Under https://github.com/orgs/{YOUR_ORG}/people the user should be listed in your organization. If not, make sure the user is added as a member with global read access. a) First, revoke any existing FOSSA access at https://github.com/settings/applications. b) Then, connect FOSSA back to GitHub at app.fossa.com/projects/import/github but DO NOT authorize yet; stop at this screen: c) Ensure that your organization has access. You should see a green check mark: If not, there should be a \"Request\" or \"Grant\" button that you need to click. You will need an administrator who is logged into that organization to grant access. They can configure third-party access settings at: https://github.com/organizations/{YOURORGANIZATION}/settings/oauthapplication_policy If you have turned on access restriction, ensure that FOSSA is approved: If you already authorized the FOSSA app without also granting our app access to an organization with repositories that you want analyzed, you can still do so by logging in to your own GitHub account and navigating to the Authorized OAuth Apps page: After you click on the FOSSA app, you'll see your organization near the bottom: Click \"Request\" and have an owner of the organization approve the request. You'll then be able to import repositories owned by the organization. GitHub Enterprise (on-prem only) This guide covers integrating an on-prem FOSSA appliance with GitHub Enterprise behind the firewall. To get started, you will have to set up an Oauth App in GitHub. This can be done by navigating to `{GITHUBURL}/organizations/{ORGANIZATIONNAME}/settings/applications: Make sure you configure your Authorization callback URL to point to {FOSSA HOST}/api/services/github/authorize/callback Now that GitHub Enterprise is configured, you will have to add access details to the FOSSA config. SSH into the box hosting FOSSA and edit FOSSA's configuration file (config.env). Find or add the following lines: ``` githubenabled=true githubbaseurl={GITHUBHOST} githubenterprise=true githubcredentialsoauth2clientid={GITHUBCLIENT_ID} githubcredentialsoauth2clientsecret={GITHUBCLIENT_SECRET} githubcredentialsoauth2callback={FOSSA HOST}/api/services/github/authorize/callback ``` If FOSSA is currently running, run fossa restart while still inside of your SSH session and wait for FOSSA to boot up again. Congrats! Now you should be able to connect to Github Enterprise and begin importing from the service. Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "gitlab.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "Integrating FOSSA with GitHub FOSSA supports and integrates with GitHub tools out of the box. You should be able to sign in with GitHub and immediately get going with importing repos and scanning Pull Requests, but some permission configurations can lead to access issues. If you're in GitHub and not seeing repos or organizations listed, you may need to ensure that your account has the right permissions. Our integration functions as an OAuth App. Under https://github.com/orgs/{YOUR_ORG}/people the user should be listed in your organization. If not, make sure the user is added as a member with global read access. a) First, revoke any existing FOSSA access at https://github.com/settings/applications. b) Then, connect FOSSA back to GitHub at app.fossa.com/projects/import/github but DO NOT authorize yet; stop at this screen: c) Ensure that your organization has access. You should see a green check mark: If not, there should be a \"Request\" or \"Grant\" button that you need to click. You will need an administrator who is logged into that organization to grant access. They can configure third-party access settings at: https://github.com/organizations/{YOURORGANIZATION}/settings/oauthapplication_policy If you have turned on access restriction, ensure that FOSSA is approved: If you already authorized the FOSSA app without also granting our app access to an organization with repositories that you want analyzed, you can still do so by logging in to your own GitHub account and navigating to the Authorized OAuth Apps page: After you click on the FOSSA app, you'll see your organization near the bottom: Click \"Request\" and have an owner of the organization approve the request. You'll then be able to import repositories owned by the organization. GitHub Enterprise (on-prem only) This guide covers integrating an on-prem FOSSA appliance with GitHub Enterprise behind the firewall. To get started, you will have to set up an Oauth App in GitHub. This can be done by navigating to `{GITHUBURL}/organizations/{ORGANIZATIONNAME}/settings/applications: Make sure you configure your Authorization callback URL to point to {FOSSA HOST}/api/services/github/authorize/callback Now that GitHub Enterprise is configured, you will have to add access details to the FOSSA config. SSH into the box hosting FOSSA and edit FOSSA's configuration file (config.env). Find or add the following lines: ``` githubenabled=true githubbaseurl={GITHUBHOST} githubenterprise=true githubcredentialsoauth2clientid={GITHUBCLIENT_ID} githubcredentialsoauth2clientsecret={GITHUBCLIENT_SECRET} githubcredentialsoauth2callback={FOSSA HOST}/api/services/github/authorize/callback ``` If FOSSA is currently running, run fossa restart while still inside of your SSH session and wait for FOSSA to boot up again. Congrats! Now you should be able to connect to Github Enterprise and begin importing from the service. Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "java.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "Now that we have a safe space to store our API Key, we need to grant the CodeBuild Service access to it. CodeBuild utilizes the buildspec.yml file in the root of your repository to build the project. The stages are defined here and artifacts are extracted. Open your buildspec.yml file. If you do not have this file, create one by following this guide. Add the \"env\" section before \"phases\" if you don't already have it. Add the section \"parameter-store\" within that, and finally, add \"FOSSAAPIKEY: \"FOSSAAPIKEY\"\" below that. It should look like the snippet below. ``` version: 0.2 env: parameter-store: FOSSAAPIKEY: \"FOSSAAPIKEY\" phases: install: commands: ``` In the \"commands\" section under the \"post_build\" section, add the new command bash sca.sh. It should look like the snippet below. ``` post_build: commands: echo Entering post_build phase... echo Build completed on `date` bash sca.sh mv target/ROOT . ``` Note: This file was create by CodeStar and contains steps specific to the provide application. ``` version: 0.2 env: parameter-store: FOSSAAPIKEY: \"FOSSAAPIKEY\" phases: install: commands: pip install --upgrade awscli pre_build: commands: echo Entering pre_build phase... echo Test started on `date` mvn clean compile test build: commands: echo Entering build phase... echo Build started on `date` mvn war:exploded post_build: commands: echo Entering post_build phase... echo Build completed on `date` bash sca.sh mv target/ROOT . artifacts: type: zip files: 'ROOT/WEB-INF/classes/application.properties' 'ROOT/WEB-INF/classes/com/aws/codestar/projecttemplates/HelloWorldAppInitializer.class' 'ROOT/WEB-INF/classes/com/aws/codestar/projecttemplates/configuration/ApplicationConfig.class' 'ROOT/WEB-INF/classes/com/aws/codestar/projecttemplates/configuration/MvcConfig.class' 'ROOT/WEB-INF/classes/com/aws/codestar/projecttemplates/controller/HelloWorldController.class' 'ROOT/WEB-INF/lib/aopalliance-1.0.jar' 'ROOT/WEB-INF/lib/commons-fileupload-1.3.3.jar' 'ROOT/WEB-INF/lib/commons-io-2.5.jar' 'ROOT/WEB-INF/lib/commons-logging-1.2.jar' 'ROOT/WEB-INF/lib/javax.servlet-api-3.1.0.jar' 'ROOT/WEB-INF/lib/spring-aop-4.3.14.RELEASE.jar' 'ROOT/WEB-INF/lib/spring-beans-4.3.14.RELEASE.jar' 'ROOT/WEB-INF/lib/spring-context-4.3.14.RELEASE.jar' 'ROOT/WEB-INF/lib/spring-core-4.3.14.RELEASE.jar' 'ROOT/WEB-INF/lib/spring-expression-4.3.14.RELEASE.jar' 'ROOT/WEB-INF/lib/spring-web-4.3.14.RELEASE.jar' 'ROOT/WEB-INF/lib/spring-webmvc-4.3.14.RELEASE.jar' 'ROOT/WEB-INF/views/index.jsp' 'ROOT/resources/gradients.css' 'ROOT/resources/set-background.js' 'ROOT/resources/styles.css' 'ROOT/resources/tweet.svg' ``` In the buildspec.yml, we reference a file called sca.sh, which does not exist yet. So, let's make it. Create the file in the root directory of the repository, and chmod it to enable execution. ``` touch sca.sh && chmod +x sca.sh ``` Edit the script to include the downloading of the FOSSA CLI and a config file if you don't have it already. It should look something like the file below. ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash fossa analyze ``` Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "golang.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "FOSSA supports Go codebases through Dep, Go modules, Govendor, Gopkg, and Glide. | Tool | |:--| | Dep | | Go modules | | Govendor | | Gopkg | | Glide | To get started, install the latest release of fossa-cli from our GitHub releases page: ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash ``` Once installed, run fossa analyze inside of your repo's root directory to analyze your Golang project. You can view our extended documentation for golang here. Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "javascript#npm-enterprise.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "FOSSA supports Go codebases through Dep, Go modules, Govendor, Gopkg, and Glide. | Tool | |:--| | Dep | | Go modules | | Govendor | | Gopkg | | Glide | To get started, install the latest release of fossa-cli from our GitHub releases page: ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash ``` Once installed, run fossa analyze inside of your repo's root directory to analyze your Golang project. You can view our extended documentation for golang here. Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "javascript.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "FOSSA supports JavaScript and Node.js codebases through NPM, Yarn, and Pnpm. | Tool | Quick Import (app.fossa.com) | CLI (fossa-cli) | |:-|:--|:--| | npm | package.json, package-lock.json | package.json, package-lock.json | | Yarn | yarn.lock | package.json, yarn.lock | | Pnpm | nan | pnpm-lock.yaml | | Bower | bower.json | nan | If you use FOSSA's automated build infrastructure, FOSSA will resolve dependencies by attempting to build your codebase via npm install --production or yarn install --frozen-lockfile. If this fails or is disabled by setting prefermediateddependencies to false, FOSSA will fall back to statically analyzing and traversing your package manifests (package.json, yarn.lock, component.json, bower.json). By default, FOSSA filters out any devDependencies entries. If you are using FOSSA's automated builds, FOSSA will prefer the lockfiles you provide. If you are using have build scripts that will edit your build behavior, it is recommended that you use Provided Builds. To get started, install the latest release of fossa-cli from our GitHub releases page: ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash ``` Once installed, run fossa analyze inside of your repo's root directory. You can view further documentation on our implementation, as well as inspect the code directly. You can configure authentication to enable FOSSA to fetch dependencies from authenticated registries such as private npm packages, private Artifactory instances, or npm Enterprise instances. In order for FOSSA to reach privately-scoped packages on [npmjs.com], go to your Javascript Language Settings under Account Settings > Languages > Javascript and add your login credentials: npm Authentication Settings After hitting \"Save\", you should be able to \"retry\" any unreachable npm dependencies in FOSSA and begin to analyze them. Finding Access Credentials If you don't know your credentials, you can find them in .npmrc or ~/.npmrc after running npm login. Learn more. On-Prem Only npm Enterprise and Artifactory-configured npm registires are only supported in FOSSA on-prem. To configure authentication on-prem, your FOSSA admin must edit FOSSA's config.env file with one of two authentication methods. Check your .npmrc to see which of the two formats below you use. For newer registries or NPM Enterprise, FOSSA supports tokens for authentication. If you are using this method, you can find a line in your .npmrc formatted as //REGISTRYURL/:authToken=AUTH_TOKEN. Take the AUTH_TOKEN and add the following config: ``` fetchersnpmauthtoken=AUTHTOKEN ``` Many systems still use legacy authentication, especially if you are using a private registry like Artifactory. Look for email, _auth and username in your .npmrc. ``` fetchersnpmauthemail fetchersnpmauthtoken # _auth parameter in .npmrc fetchersnpmauthusername ``` After configuring, your FOSSA admin must run fossa restart. If you are using a private registry like Artifactory for you NPM code, your FOSSA admin can specify a private registry URL: ``` fetchersnpmregistry=YOURREGISTRYURL ``` Often private registries require authentication, which is covered above under Private Packages. See here for FOSSA's NPM Enterprise integration. Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "ruby.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "FOSSA supports Ruby through RubyGems. | Tool | Quick Import (app.fossa.com) | CLI (fossa-cli) | |:--|:--|:-| | bundler | Gemfile, Gemfile.lock or *.gemspec | Gemfile, Gemfile.lock | | gem | Gemfile | Gemfile.lock | When Ruby code is imported, FOSSA will find and run any Gemfile or *.gemspec files and monitor dependency activity. If a Gemfile.lock is present, FOSSA will prefer that for dependency information. To get started, install the latest release of fossa-cli from our GitHub releases page: ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash ``` In CI/CD Scanning for Ruby, fossa analyze will rely on the output of bundle list to determine what was installed in your build environment. If bundle list command cannot be executed successfully, it will parse Gemfile.lock. View extended documentation here. FOSSA supports fetching private Gems from custom or authenticated sources. You can configure FOSSA's access to private Gem sources in your Ruby Language Settings found at Account Settings > Languages > Ruby: Configuring Private RubyGem Sources Once configured, FOSSA will be able to resolve any previously unreachable Gems. For basic metadata, FOSSA will parse or evaluate all available metadata files for license and authorship information. This includes Gemfile, Gemfile.lock and *.gemspec formats. Since source is generally accessible, FOSSA supports full code auditing on RubyGems and will run license scans / code analysis across all files in a given Gem. Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "php.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "FOSSA supports PHP projects through Composer. | Tool | Quick Import (app.fossa.com) | CLI (fossa-cli) | |:|:-|:| | Composer | composer.json | composer.lock | To get started, install the latest release of fossa-cli from our GitHub releases page: ``` curl -H 'Cache-Control: no-cache' https://raw.githubusercontent.com/fossas/fossa-cli/master/install-latest.sh | bash ``` Once installed, run fossa analyze inside of your repo's root directory to analyze your Compose project. You can view our extended documentation here. If an exact version is not given (i.e. a version range), FOSSA will resolve a dependency to the highest version satisfying the constraint compliant to the Composer versioning spec. Currently, Repository Scanning of Composer projects have the following limitations: FOSSA supports any package available on https://packagist.org/. All code within a package is audited for license information. If a license file is declared by the license field in composer.json, it will be elected as a \"Declared License\" or \"Primary License\" in the FOSSA UI. Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "slack.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "Integrating FOSSA notifications with Slack channels This guide is for you to set up the FOSSA's issue notifications to publish to Slack channels. You first need to authorize FOSSA to access your Slack team. Navigate to FOSSA Slack Integration Settings. Connect to Slack NOTE: You can set this up for as many channels as you'd like. Now teams can successfully connect their FOSSA projects with Slack. Now that Slack channel settings are configured, you will need to enable Slack Notifications on a project by project basis. To change your project notifications, simply navigate to the settings tab of your project, and select which notifications you would like to be enabled for your Slack channel. After this is configured, you should be all set up. On-Prem Configuration Add this block to input-values.yaml file when you install fossa with helm ``` slack: {} ``` If you have any problems, contact support at [emailprotected] Updated 8 months ago" } ]
{ "category": "Provisioning", "file_name": "supported-languages.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "Generating API tokens and utilizing endpoints for custom integrations. To use CI/CD Scanning or integrate with many of FOSSA's services, you must provision API tokens. FOSSA allows users to create API tokens to access the API. To create a token, visit your Account Settings: To use the API token for fossa-cli or many of our client integrations, you must set the FOSSAAPIKEY environment variable or pass it directly to the tool/integration. To authenticate and access our API, include an Authorization header in the request: curl -H \"Authorization: Bearer <token>\" \"https://app.fossa.com/<API endpoint>\" Creating a push only API token restricts the users access to only allow uploading builds. The API token will be restricted from reading anything about the project or editing existing information. This token was created with open source project maintainers in mind. The FOSSA API key is required to be set as an environment variable or included in the configuration file whenever integrating FOSSA with a CI system, such as TravisCI. This has the unfortunate side effect of exposing the API key to anyone who makes a pull request. Restricting a user's access with a push only API token is the best way to combat any malicious actors. The steps to create one are as follows: Try it out! Try running FOSSAAPIKEY=<pushonlytoken> fossa report licenses to see what happens when you attempt to access restricted information. The FOSSA API is available for enterprise customers to build custom integrations. FOSSA provides an API to access one of the largest databases of open source projects and metadata in the world. Currently, our registry hosts data on over 23 million components totaling beyond 5TB of data. In addition, our service offers API endpoints by which you can programmatically fetch data about your project and our analysis of it to automate parts of your workflow including: Contact [emailprotected] for more information. Updated 2 months ago" } ]
{ "category": "Provisioning", "file_name": "privacy-policy.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "Effective date: March 12, 2024 FOSSA, Inc. (\"us\", \"we\", or \"our\") operates the https://fossa.com website (the \"Site\") and provides a cloud-based platform to its customers which assists in the management of open source software (the Platform\" and together with the Site, the Service). This page informs you of our policies regarding the collection, use, and disclosure of personal data when you use our Service and the choices you have associated with that data. We use your data to provide and improve the Service. By using the Service, you agree to the collection and use of information in accordance with this policy. Unless otherwise defined in this Privacy Policy, terms used in this Privacy Policy have the same meanings as in our Terms and Conditions, accessible from https://fossa.com/terms Service Service is the https://fossa.com or https://fossa.io website(s) operated by FOSSA, Inc. and the Platform. Personal Data Personal Data means data about a living individual who can be identified from such data (or from such data and other information either in our possession or likely to come into our possession). Platform The cloud-based platform offered to its customer which assists in the management of open source software and software vulnerabilities. Usage Data Usage Data is data collected automatically either generated by the use of the Service or from the Service infrastructure itself (for example, the duration of a page visit). Cookies Cookies are small pieces of data stored on your device (computer or mobile device). Data Controller Data Controller means the natural or legal person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal information is, or will be, processed. For the purpose of this Privacy Policy, we are a Data Controller of your Personal Data. Data Processors (or Service Providers) Data Processor (or Service Provider) means any natural or legal person who processes the data on behalf of the Data Controller. We may use the services of various Service Providers in order to process your data more effectively in connection with the Services. In connection with the Platform, our current sub-processors are: Data Subject (or User) Data Subject is any living individual who is using our Service and is the subject of Personal Data. We collect several different types of information for various purposes to provide and improve our Service to you. Personal Data While using our Service, we may ask you to provide us with certain personally identifiable information that can be used to contact or identify you (\"Personal Data\"). Personally identifiable information may include, but is not limited to: We may use your Personal Data to contact you with newsletters, marketing or promotional materials and other information that may be of interest to you. You may opt out of receiving any, or all, of these communications from us by following the unsubscribe link or instructions provided in any email we send or by contacting us. Usage Data We may also collect information about how the Service is accessed and used (\"Usage Data\"). This Usage Data may include information such as your computer's Internet Protocol address (e.g. 
IP address), browser type, browser version, the pages of our Service that you visit, the time and date of your visit, the time spent on those pages, unique device identifiers and other diagnostic" }, { "data": "Tracking & Cookies Data We use cookies and similar tracking technologies to track the activity on our Service and hold certain information. Cookies are files with a small amount of data which may include an anonymous unique identifier. Cookies are sent to your browser from a website and stored on your device. Tracking technologies also used are beacons, tags, and scripts to collect and track information and to improve and analyze our Service. You can instruct your browser to refuse all cookies or to indicate when a cookie is being sent. However, if you do not accept cookies, you may not be able to use some portions of our Service. Examples of Cookies we use: FOSSA, Inc. uses the collected data for various purposes: If you are from the European Economic Area (EEA), FOSSA, Inc.'s legal basis for collecting and using the personal information described in this Privacy Policy depends on the Personal Data we collect and the specific context in which we collect it. FOSSA, Inc. may process your Personal Data because: For any questions please email us at [emailprotected] FOSSA, Inc. will retain your Personal Data only for as long as is necessary for the purposes set out in this Privacy Policy. We will retain and use your Personal Data to the extent necessary to comply with our legal obligations (for example, if we are required to retain your data to comply with applicable laws), resolve disputes, and enforce our legal agreements and policies. FOSSA, Inc. will also retain Usage Data for internal analysis purposes. Usage Data is generally retained for a shorter period of time, except when this data is used to strengthen the security or to improve the functionality of our Service, or we are legally obligated to retain this data for longer time periods. Your information, including Personal Data, may be transferred to and maintained on computers located outside of your state, province, country or other governmental jurisdiction where the data protection laws may differ from those from your jurisdiction. If you are located outside the United States and choose to provide information to us, please note that we and our sub-processors will process the data, including Personal Data, in the United States. FOSSA, Inc. will take all steps reasonably necessary to ensure that your data is treated securely and in accordance with this Privacy Policy and no transfer of your Personal Data will take place to an organization or a country unless there are adequate controls in place including the security of your data and other personal information. In relation to the onward transfer, FOSSA, Inc. is responsible for the processing of personal data it receives under the EU - U.S. and Swiss - U.S. Privacy Shield frameworks and subsequently transfers to a third party acting as an agent on its behalf. We comply with the Privacy Shield Principles for all onward transfers of personal data from the EU and Switzerland, including the onward transfer liability provisions. In most cases, FOSSA maintains contracts with these third parties restricting their access, use, and disclosure of personal data in compliance with FOSSA's Privacy Shield obligations. If FOSSA, Inc. is involved in a merger, acquisition or asset sale, your Personal Data may be transferred.
We will provide notice before your Personal Data is transferred and becomes subject to a different Privacy Policy. Under certain circumstances, FOSSA," }, { "data": "may be required to disclose your Personal Data if required to do so by law or in response to valid requests by public authorities (e.g. a court or a government agency). FOSSA, Inc. may disclose your Personal Data in the good faith belief that such action is necessary to: The security of your data is important to us, but remember that no method of transmission over the Internet, or method of electronic storage is 100% secure. While we strive to use commercially acceptable means to protect your Personal Data, we cannot guarantee its absolute security. We do not support Do Not Track ("DNT"). Do Not Track is a preference you can set in your web browser to inform websites that you do not want to be tracked. You can enable or disable Do Not Track by visiting the Preferences or Settings page of your web browser. FOSSA, Inc. complies with the EU-U.S. Privacy Shield Framework and Swiss-U.S. Privacy Shield Framework as set forth by the U.S. Department of Commerce regarding the collection, use, and retention of personal information transferred from the European Union and Switzerland to the United States. FOSSA, Inc. has certified to the Department of Commerce that it adheres to the Privacy Shield Principles. If there is any conflict between the terms in this privacy policy and the Privacy Shield Principles, the Privacy Shield Principles shall govern. To learn more about the Privacy Shield program, and to view our certification, please visit https://www.privacyshield.gov/ If you are a resident of the European Economic Area (EEA) or Switzerland, you have certain data protection rights. FOSSA, Inc. aims to take reasonable steps to allow you to correct, amend, delete, or limit the use of your Personal Data. If you wish to be informed what Personal Data we hold about you and if you want it to be removed from our systems, please contact us. In all circumstances, individuals will always reserve the right to access their personal data. In certain circumstances, you have the following data protection rights: The right to access, update or to delete the information we have on you. Whenever made possible, you can access, update or request deletion of your Personal Data directly within your account settings section. If you are unable to perform these actions yourself, please contact us to assist you. The right of rectification. You have the right to have your information rectified if that information is inaccurate or incomplete. The right to object. You have the right to object to our processing of your Personal Data. The right of restriction. You have the right to request that we restrict the processing of your personal information. The right to data portability. You have the right to be provided with a copy of the information we have on you in a structured, machine-readable and commonly used format. The right to withdraw consent. You also have the right to withdraw your consent at any time where FOSSA, Inc. relied on your consent to process your personal information. Please note that we may ask you to verify your identity before responding to such requests. You have the right to complain to a Data Protection Authority about our collection and use of your Personal Data.
For more information, please contact your local data protection authority in the European Economic Area (EEA) or" }, { "data": "We may employ third party companies and individuals to facilitate our Service ("Service Providers"), to provide the Service on our behalf, to market our Service, and to perform Service-related services or to assist us in analyzing how our Service is used. These third parties have access to your Personal Data only to perform these tasks on our behalf and are obligated not to disclose or use it for any other purpose. We may use third-party Service Providers to monitor and analyze the use of our Service. Google Analytics Google Analytics is a web analytics service offered by Google that tracks and reports website traffic. Google uses the data collected to track and monitor the use of our Service. This data is shared with other Google services. Google may use the collected data to contextualize and personalize the ads of its own advertising network. You can opt-out of having made your activity on the Service available to Google Analytics by installing the Google Analytics opt-out browser add-on. The add-on prevents the Google Analytics JavaScript (ga.js, analytics.js, and dc.js) from sharing information with Google Analytics about visits activity. For more information on the privacy practices of Google, please visit the Google Privacy & Terms web page: http://www.google.com/intl/en/policies/privacy/ FOSSA, Inc. uses remarketing services to advertise on third party websites to you after you visited our Service, including those remarketing services set forth below. We and our third-party vendors use cookies to inform, optimize and serve ads based on your past visits to our Service. Google AdWords Google AdWords remarketing service is provided by Google Inc. You can opt-out of Google Analytics for Display Advertising and customize the Google Display Network ads by visiting the Google Ads Settings page: http://www.google.com/settings/ads Google also recommends installing the Google Analytics Opt-out Browser Add-on (https://tools.google.com/dlpage/gaoptout) for your web browser. Google Analytics Opt-out Browser Add-on provides visitors with the ability to prevent their data from being collected and used by Google Analytics. For more information on the privacy practices of Google, please visit the Google Privacy & Terms web page: http://www.google.com/intl/en/policies/privacy/ Twitter Twitter remarketing service is provided by Twitter Inc. You can opt-out from Twitter's interest-based ads by following their instructions: https://support.twitter.com/articles/20170405 You can learn more about the privacy practices and policies of Twitter by visiting their Privacy Policy page: https://twitter.com/privacy Facebook Facebook remarketing service is provided by Facebook Inc. You can learn more about interest-based advertising from Facebook by visiting this page: https://www.facebook.com/help/164968693837950 To opt-out from Facebook's interest-based ads follow these instructions from Facebook: https://www.facebook.com/help/568137493302217 Facebook adheres to the Self-Regulatory Principles for Online Behavioral Advertising established by the Digital Advertising Alliance.
You can also opt-out from Facebook and other participating companies through the Digital Advertising Alliance in the USA (http://www.aboutads.info/choices/), the Digital Advertising Alliance of Canada (http://youradchoices.ca/) or the European Interactive Digital Advertising Alliance in Europe (http://www.youronlinechoices.eu/), or opt-out using your mobile device settings. For more information on the privacy practices of Facebook, please visit Facebook's Data Policy: https://www.facebook.com/privacy/explanation Retention.com Retention.com is an online data partner that provides us with information they have in their databases about individuals who are associated with activity on or visitors to our Site and/or Platform (only in the US), which may include Personal Data. We may use this information to remarket and send emails and advertising to you directly and through our service providers. To learn more about the privacy practices of Retention.com, please review their Privacy Policy:" }, { "data": "You may opt out of having your information associated with their databases and prevent certain other activities by visiting https://app.retention.com/optout and https://app.retention.com/ccpa_details/ We may provide paid products and/or services within the Service. In that case, we use third-party services for payment processing (e.g. payment processors). We will not store or collect your payment card details. That information is provided directly to our third-party payment processors whose use of your personal information is governed by their Privacy Policy. These payment processors adhere to the standards set by PCI-DSS as managed by the PCI Security Standards Council, which is a joint effort of brands like Visa, Mastercard, American Express and Discover. PCI-DSS requirements help ensure the secure handling of payment information. The payment processors we work with are: Stripe Their Privacy Policy can be viewed at https://stripe.com/us/privacy Our Service may contain links to other sites that are not operated by us. If you click on a third party link, you will be directed to that third party's site. We strongly advise you to review the Privacy Policy of every site you visit. We have no control over and assume no responsibility for the content, privacy policies or practices of any third party sites or services. Our Service does not address anyone under the age of 18 ("Children"). We do not knowingly collect personally identifiable information from anyone under the age of 18. If you are a parent or guardian and you are aware that your child has provided us with Personal Data, please contact us. If we become aware that we have collected Personal Data from children without verification of parental consent, we take steps to remove that information from our servers. We may update our Privacy Policy from time to time. We will notify you of any changes by posting the new Privacy Policy on this page. We will let you know via email and/or a prominent notice on our Service, prior to the change becoming effective and update the "effective date" at the top of this Privacy Policy. You are advised to review this Privacy Policy periodically for any changes. Changes to this Privacy Policy are effective when they are posted on this page.
As part of our commitment to the Privacy Shield Principles, if you are a resident of the European Union or Switzerland and you have a privacy or data use concern, please contact FOSSA directly at [emailprotected] and FOSSA will use its best efforts to address your concern within 45 days of receipt of your complaint. For an unresolved privacy or data use concern that FOSSA has not addressed satisfactorily, please contact our U.S.-based third-party dispute resolution provider (free of charge) at https://www.jamsadr.com/eu-us-privacy-shield. For any Privacy Shield disputes that cannot be resolved by the methods above, you may be able to invoke a binding arbitration process under certain conditions. To find out more about the Privacy Shield's binding arbitration scheme, please see: https://www.privacyshield.gov/article?id=ANNEX-I-introduction. The Federal Trade Commission has investigation and enforcement authority over FOSSA's compliance with the Privacy Shield Framework. If you have any questions about this Privacy Policy, please contact us: [emailprotected] If you have any comments, concerns or questions about our privacy practices in general, please send an email to [emailprotected] or send mail to: FOSSA, Inc. Attn: Privacy 114 Sansome St, Suite 210 San Francisco, CA 94104 Your right to file a complaint with a data supervisory authority about our privacy practices remains unaffected." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Fugue", "subcategory": "Security & Compliance" }
[ { "data": "Fugue v2022.06.29 Fugue ensures cloud infrastructure stays in continuous compliance with enterprise security policies. Learn more on our product page. Getting Started Quickly create a Fugue environment. Examples Walkthroughs and tutorials. FAQ At-a-glance information. Release Notes The latest Fugue updates. Service Coverage Supported AWS, AWS GovCloud, Azure, and Google Cloud services. Visualizer Explore an interactive diagram of your cloud infrastructure API User Guide Fugue API instructions and examples API Reference Swagger specification Fugue 101 Core concepts for using Fugue Sign up for Fugue Register for a free account" } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "GitGuardian", "subcategory": "Security & Compliance" }
[ { "data": "Create a GitGuardian account, verify email, and manage account settings. Integrate GitGuardian with various VCSs, including GitHub, GitLab, Bitbucket, and Azure Repos, and monitor the codebase. Set up a monitored perimeter in GitGuardian, define its scope, and configure monitoring settings for different types of repositories. See how GitGuardian detects secrets and sensitive information in code repositories and alerts users of potential security risks. Create decoy secrets and monitor them for unauthorized use, providing an additional layer of active defense for detecting and preventing supply chain attacks. Protect your infrastructure at the source by finding and fixing IaC misconfigurations before they reach your cloud. Identify vulnerable dependencies early, prioritize your remediation, and ensure the compliance of your third-party components. (ggshield) Detect and prevent 350+ types of hardcoded secrets and 70+ IaC misconfigurations before pushing code to the command line. Configure SAML SSO, manage your email domain, review your workspace audit logs, and transfer workspace ownership. Manage GitGuardian user accounts, including settings for personal information, notifications, and integrations. Set up and run GitGuardian on your infrastructure. See installation, upgrading, troubleshooting, and optimization tips. Use the public GitGuardian API to manage your workspace and incidents programmatically. By submitting this form, I agree to GitGuardians Privacy Policy How can I help you ?" } ]
{ "category": "Provisioning", "file_name": "terms-of-service.md", "project_name": "FOSSA", "subcategory": "Security & Compliance" }
[ { "data": "These Terms of Service (Terms) are provided by FOSSA, Inc. (Fossa, we, our, or us) and govern your use of our website and the software we make available to you and other users (collectively, the Service). To use the Service, Users must at all times agree to and abide by these Terms. The Service allows you to automatically analyze your code and certain other information related to your organization and your code (such code and information, collectively, User Data). Fossa provides a software platform designed to analyze you or your organizations code for compliance with open source licenses. That is, our Service produces insights about the open source code used by your developers and teams and provides simple and effective intelligence to leverage the open source code effectively. Our Service may be made available on a hosted and/or on-premises basis, as set forth in the applicable order form, or otherwise mutually agreed in writing, with Fossa or its authorized resellers. In all cases, your usage of the Service will be subject to these Terms. In addition, certain portions of these Terms will apply only to those Users who have purchased a license to use certain features of the Service that are made available to Users subject to such Users payment of additional fees (the Premium Features). These Terms of Service constitute a legal contract between you and your company, organization, or entity (you or, collectively with other users, Users), on the one hand, and Fossa on the other regarding your and your company, organization, or entitys use of the Service. Fossa may have different roles with respect to different types of Users, and you as used in these Terms will apply to the appropriate type of User under the circumstances (e.g., an individual User and/or a Subscribing Organization). If you are using or opening an account with Fossa on behalf of a company, entity, or organization (collectively, the Subscribing Organization) then you represent and warrant that you: (i) are an authorized representative of that entity with the authority to bind such entity to these Terms; (ii) have read these Terms; (iii) understand these Terms, and (iv) agree to these Terms on behalf of such Subscribing Organization. Please read carefully the following terms and conditions. By registering for and/or using or subscribing to the Service, or by clicking I Agree, or otherwise affirmatively manifesting your intent to be bound by these Terms of Service, you signify that you have read, understood, and agree to be bound by the following terms, including any additional guidelines and any future modifications by Fossa (collectively, the Terms), and to the collection and use of your personal information as set forth in the our Privacy Policy (https://docs.fossa.com/docs/privacy-policy). Please read these Terms carefully to ensure that you understand each provision. This Agreement contains a mandatory individual arbitration and class action/jury trial waiver provision that requires the use of arbitration on an individual basis to resolve disputes, rather than jury trials or class actions. 1.1. General License Grant. 
Subject to the terms and conditions of these Terms, Fossa hereby grants to you a limited, non-exclusive, non-sublicensable, non-transferable license during your applicable purchased subscription license term to use the Service in the manner contemplated by these Terms and any applicable order form approved by Fossa, solely for your internal business purposes in accordance with the documentation for the applicable portion of the Service; provided that this Section" }, { "data": "does not grant you any right to use any Premium Features unless otherwise mutually agreed in an order form. Users shall have no right to sub-license or resell the Service or any component thereof. 1.2. Premium Features License Grant. Subject to the terms and conditions of these Terms, and in addition to the rights granted above in Section 1.1, to the extent that you have purchased a license to use any set or sub-set of the Premium Features and are current on all amounts due and payable with respect thereto, Fossa hereby grants to you a limited, non-exclusive, non-sublicensable, non-transferable license during your applicable purchased subscription license term to use the Premium Features (only in object code form or as a SaaS solution hosted by Fossa, as applicable to your license) in the manner contemplated by these Terms solely for your internal business purposes in accordance with the documentation for the applicable Service. 1.3. Limitations on Use. You may not use the Service except as permitted in this Agreement. Except with Fossa's prior written consent or as expressly permitted under this Agreement, you may not: (a) alter, modify or create any derivative works of the Service, the underlying source code, or the documentation in any way, including without limitation customization, translation or localization; (b) port, reverse compile, reverse assemble, reverse engineer, or otherwise attempt to separate any of the components of the Service or derive the source code for the Service (except to the extent applicable laws specifically prohibit such restriction); (c) copy, redistribute, encumber, sell, rent, lease, sublicense, or otherwise transfer rights to the Service or documentation; (d) remove or alter any trademark, logo, copyright or other proprietary notices, legends, symbols or labels in the Service or documentation; (e) permit more Users than have been authorized by Fossa in writing to access or use the Service, or otherwise exceed the authorized usage restrictions; (f) use the Service other than in accordance with this Agreement and all applicable law (including, without limitation, all privacy, data protection and intellectual property laws); or (g) access or use the Service or documentation for the purpose of building a product or service with functionality substantially similar to the Service. You may not release the results of any performance or functional evaluation of any of the Service to any third party without prior written approval of Fossa for each such release. You may not cause or permit any third party to do any of the foregoing. User privacy is important to us. Please read our Privacy Policy (https://docs.fossa.com/docs/privacy-policy) carefully for details relating to the collection, use, and disclosure of your personal information. If Fossa or the Service provides professional information (for example, legal information about license compliance), such information is for informational purposes only and should not be construed as professional or legal advice.
You should seek independent professional advice from a person who is licensed and/or qualified in the applicable area. 4.1. Certain features of the Service may have their own terms and conditions that you must agree to when you sign up for that particular product, function, feature, or service. Such terms supplement these Terms and are hereby incorporated by reference. 4.2. Support Schedule, Support Levels, Support. FOSSA and/or your authorized Fossa Reseller ("Fossa Reseller"), as applicable, will use commercially reasonable efforts to support customers in a timely manner. Specific support levels are available for all business and enterprise customers. All business and enterprise customers are supported via online ticket submission with a 72-hour response" }, { "data": "time. To upgrade and increase support level, please reach out to [emailprotected] or your Fossa Reseller, as applicable, for additional information. 4.3. Support Hours. Our support team is available Monday through Friday from 9:00 am Pacific Standard Time to 6:00 pm Pacific Standard Time. Support Hours are not available during United States Federal Holidays. 4.4. The software associated with the Service may include open source software that may be governed by separate open source licenses, as further defined at https://app.fossa.com/attribution/7a1f1d24-d314-4f3f-8e03-09e84facdcb0. From time to time, you may be invited to try certain products at no charge for a free trial or evaluation period or if such products are not generally available to licensees (collectively, "Evaluation Services"). Evaluation Services will be designated or identified as "beta", "pilot", "evaluation", "trial" or the like. Notwithstanding anything to the contrary, Evaluation Services are licensed for your internal evaluation purposes only to determine if you desire to proceed with a further license (and not for production use), are provided "as is" without warranty or indemnity of any kind, and may be subject to additional terms. Unless otherwise stated, any Evaluation Service trial period shall expire thirty (30) days from the trial start date. Notwithstanding the foregoing, Fossa may discontinue Evaluation Services at any time at its sole discretion and might never make any Evaluation Service generally available. Fossa will have no liability for any harm or damage arising out of or in connection with any Evaluation Service. You agree that Fossa, in its sole discretion and for any or no reason, may terminate any account (or any part thereof) you may have with Fossa and/or these Terms. In addition, Fossa reserves the right to discontinue any aspect of the Service at any time, including the right to discontinue the display and analysis of any User Data. You agree that any termination of your access to the Service or any account you may have or portion thereof may be effected without prior notice, and you agree that Fossa will not be liable to you or any third party for such termination. Any suspected fraudulent, abusive, or illegal activity that may be grounds for termination of your use of the Service may be referred to appropriate law enforcement authorities. These remedies are in addition to any other remedies Fossa may have at law or in equity. At the end of your subscription license term, or upon any other termination or expiration of these Terms, you shall cease all access to and use of the Services, and shall return any software and other Materials relating to the Service to Fossa.
If you have purchased a license to use any paid features of the Service, such as Premium Features, the following provisions apply to you: 7.1. You are responsible for paying any applicable fees at our then-current standard price list or as otherwise mutually agreed in an order form executed by Fossa and/or your Fossa Reseller and applicable taxes associated with the Service in a timely manner with a valid payment method. Unless otherwise stated, all fees are quoted in U.S. Dollars. All payments must be made electronically by the methods specified by us either on our website or via the Service or by the methods specified by your Fossa Reseller. You agree that we or your Fossa Reseller may charge your selected payment method for any such fees owed. You are required to keep your billing information current, complete, and accurate (e.g., a change in billing address, credit card number, or expiration date) and to notify Fossa or your Fossa Reseller if your selected payment method is cancelled (e.g., for loss or theft). All fees and charges are earned upon receipt by us and are nonrefundable (and there are no credits) except (a) as expressly set forth herein, and/or (b) as required by applicable law. 7.2. You are responsible for all charges incurred under your account made by you or anyone who uses your account (including your co-workers, colleagues, team-members, etc.). If your payment method fails or you are past due on amounts owed, we or your Fossa Reseller may collect fees owed using other collection mechanisms. Your account may be deactivated without notice to you if payment is past due, regardless of the dollar amount. You are also responsible for paying any governmental taxes imposed on your use of the Service, including, but not limited to, sales, use, or value-added taxes. To the extent Fossa or your Fossa Reseller is obligated to collect such taxes, the applicable tax will be added to your invoices and shall be paid from your billing account. 7.3. In the event that you license the Services directly from Fossa (and not through a Fossa Reseller), authorization to charge your chosen payment method account will remain in effect until you cancel or modify your preferences within the Service; provided, however, that such notice will not affect charges submitted before Fossa could reasonably act. Your charges may be payable in advance, in arrears, per usage, or as otherwise described when you ordered the applicable service or at https://fossa.com/pricing, as applicable. You agree that charges may be accumulated as incurred and may be submitted as one or more aggregate charges during or at the end of the applicable billing cycle. 7.4. Fossa reserves the right to change the amount of, or basis for determining, any fees or charges for the Service we provide, and to institute new fees, charges, or terms effective upon prior notice to our Users. You will receive notice of any fee change at least five (5) days before the scheduled date of the transaction and failure to cancel your account as set forth herein will constitute acceptance of such fee change. Any changes to fees will apply only on a prospective basis. If you do not agree to any such changes to fees, charges, or terms, your sole remedy is to cancel your subscription. Fees paid for any subscription term are paid in advance and are not refundable in whole or in part. If you have a balance due on any Service account, you agree that Fossa can charge these unpaid fees to any payment method that you have previously provided.
If you purchase the Services from your Fossa Reseller, the price change policy shall be set forth in the purchase agreement between you and your Fossa Reseller. 7.5. Your Service will be automatically renewed and your credit card account (or other payment method account) will be charged as follows without further authorization from you: (a) every month for monthly subscriptions; (b) upon every one (1) year anniversary for annual subscriptions; (c) such other periodic rate you have selected from among the options offered on the Service. You acknowledge that your subscription is subject to automatic renewals and you consent to and accept responsibility for all related recurring charges to your applicable payment method without further authorization from you and without further notice unless required by" }, { "data": "law. You acknowledge that the amount of the recurring charge may change if the applicable tax rates change or if there has been a change in the applicable fees. If you purchase the Services from your Fossa Reseller, the renewal policy shall be set forth in the purchase agreement between you and your Fossa Reseller. 7.6. Unless otherwise required by applicable law, for annual subscriptions, you must provide us with written notice of your intention not to renew at least fifteen (15) days prior to the end of the then-current term of your subscription. If you do so, your subscription will be cancelled at the end of the then-current term. If you purchase the Services from your Fossa Reseller, the cancellation policy shall be set forth in the purchase agreement between you and your Fossa Reseller. 7.7. You shall provide all information reasonably requested by Fossa to verify your compliance with the usage restrictions and other terms and conditions of this Agreement. 8.1. Use of User Data. By analyzing your User Data with the Service, you hereby grant, and represent and warrant that you have all rights necessary to grant, all rights and licenses to the User Data required for Fossa to provide the features and functionality of the Service. While your User Data will remain on your systems, Fossa collects certain information about your use of the Service, including aggregated data about your User Data. You agree that Fossa may collect, analyze, and use data derived from User Data, as well as data about you, and other Users' access and use of the Service, for purposes of operating, analyzing, improving, or marketing the Service and any related services. If Fossa shares or publicly discloses information (e.g., in marketing materials, or in application development) that is derived from User Data, such data will be aggregated or anonymized to reasonably avoid identification of a specific individual or the User. By way of example and not limitation, Fossa may: (a) track the number of Users on an anonymized aggregate basis as part of Fossa's marketing efforts to publicize the total number of Users of the Service; (b) analyze aggregated usage patterns for product development efforts; or (c) use aggregated or anonymous data derived from User Data in a form which may not reasonably identify either a particular individual or the User to develop further analytic frameworks and application tools. You further agree that Fossa will have the right, both during and after the term of these Terms, to use, store, transmit, distribute, modify, copy, display, sublicense, and create derivative works of the anonymized, aggregated data. 8.2. Your Responsibilities for User Data.
In connection with User Data, you hereby represent, warrant, and agree that: (a) you have obtained the User Data lawfully, and the User Data does not and will not violate any applicable laws or any person or entity's proprietary or intellectual property rights; (b) the User Data is free of all viruses, Trojan horses, and other elements that could interrupt or harm the systems or software used by Fossa or its subcontractors to provide the Service; (c) you are solely responsible for ensuring compliance with all privacy laws in all jurisdictions that may apply to User Data provided hereunder; (d) Fossa may exercise the rights in User Data granted hereunder without liability or cost to any third party; and (e) the User Data complies with the terms of these" }, { "data": "Terms. For purposes of clarity, Fossa takes no responsibility and assumes no liability for any User Data, and you will be solely responsible for your User Data. You may not submit any User Data that includes any information that can be used to identify, locate, or contact any of your employees, customers, users or potential customers or users, including: (1) first and last name; (2) home or other physical address; (3) telephone number; (4) email address or online identifier associated with an individual; (5) social security number, passport number, driver's license number, or similar identifier; (6) credit or debit card number; (7) employment, financial or health information; or (8) any other information relating to an individual, including cookie information and usage and traffic data or profiles, that is combined with any of the foregoing (collectively, "Personal Data") without Fossa's prior written approval. 8.3. Rights to User Data. For purposes of clarity, you own all right, title and interest (including all intellectual property rights) in and to your User Data. The Service is owned and operated by Fossa. The visual interfaces, graphics, design, compilation, information, computer code, products, software, services, and all other elements of the Service provided by Fossa, but expressly excluding any of the foregoing owned or licensed by and posted to the Service at the direction of Users (including without limitation User Data) ("Materials") are protected by intellectual property and other applicable laws. Except for any technology licensed by Fossa, which is owned by and provided by our third-party licensors, all Materials contained in the Service, including without limitation the intellectual property rights therein and thereto, are the property of Fossa or its subsidiaries or affiliated companies. All trademarks, service marks, and trade names are proprietary to Fossa or its affiliates and/or third-party licensors. Except as expressly provided herein, nothing in these Terms shall be deemed to create a license in or under any such Materials or the intellectual property rights therein or thereto, and you agree not to sell, license, distribute, copy, modify, publicly perform or display, transmit, publish, edit, adapt, create derivative works from, or otherwise make unauthorized use of the Materials. For the avoidance of doubt, as between the parties, Fossa retains all right, title and interest in and to the Service, and all improvements, modifications, and derivatives thereof, and all intellectual property rights relating to the foregoing. You shall not use or disclose any Fossa software or Materials except as expressly authorized herein.
Neither party may use the other party's name, logo or marks without such other party's written pre-approval; provided that Fossa may: (a) issue one (1) or more press releases or similar materials announcing that you are a customer and user of the Service; (b) refer to you or your usage on its customer lists, website, and other marketing materials; and (c) develop use cases based on your use of the Service, with respect to which you will provide all reasonable cooperation requested by Fossa. You may choose to or we may invite you to submit comments or ideas about the Service, including without limitation about how to improve the Service or our products ("Ideas"). By submitting any Idea, you agree that your disclosure is gratuitous, unsolicited and without restriction and will not place Fossa under any fiduciary or other obligation, and that we are free to use, exercise and exploit the Idea on a nonexclusive basis without any additional compensation to you, and/or to disclose the Idea on a non-confidential basis or otherwise to" }, { "data": "anyone. You further acknowledge that, by acceptance of your submission, Fossa does not waive any rights to use similar or related ideas previously known to Fossa, or developed by its employees, or obtained from sources other than you. 10.1. The Service may call the servers of other websites or services solely at the direction of and as a convenience to Users ("Third-Party Sites"). Fossa makes no express or implied warranties with regard to the information, or other material, products, or services that are contained on or accessible through Third-Party Sites. Access and use of Third-Party Sites, including the information, material, products, and services on such sites or available through such sites, is solely at your own risk. Further, you acknowledge that the Service may contain copyrighted software of our suppliers that is obtained under a license from such suppliers ("Third-Party Software"). All third-party licensors and suppliers retain all right, title and interest in and to such Third-Party Software and all copies thereof, including all copyright and other intellectual property rights. Your use of any Third-Party Software shall be subject to, and you shall comply with, the terms and conditions of this Agreement, and the applicable restrictions and other terms and conditions set forth in any Third-Party Software documentation or printed materials, including without limitation an end user license agreement. 10.2. You acknowledge that Fossa does not manage or control the User Data that you analyze with the Service, and accepts no responsibility or liability for that information regardless of whether such User Data is analyzed by you in breach of these Terms. We have implemented commercially reasonable technical and organizational measures designed to secure information you provide us from accidental loss and from unauthorized access, use, alteration or disclosure. However, we cannot guarantee that unauthorized third parties will never be able to defeat those measures or use your information for improper purposes. You understand that internet technologies have the inherent potential for disclosure. You acknowledge that you are under no obligation to provide Personal Data or other sensitive information in order to use the Service and that you provide any such information at your own risk. 12.1.
The Service and any third-party or User Data, software, services, or applications made available in conjunction with or through the Service are provided "as is" and "as available," without warranties of any kind, either express or implied. To the fullest extent permissible pursuant to applicable law, Fossa, its suppliers, licensors, and partners disclaim all warranties, statutory, express or implied, including, but not limited to, implied warranties of merchantability, fitness for a particular purpose, and non-infringement of proprietary rights.

12.2. Fossa, its suppliers, licensors, and partners do not warrant that the functions contained in the Service will be uninterrupted or error-free, that the Service will meet your requirements, that defects will be corrected, or that the Service or the server that makes it available is free of viruses or other harmful components. You acknowledge that Fossa Resellers have no authority to make any representations, warranties, or guarantees about the Service.

12.3. Fossa, its suppliers, licensors, and partners do not warrant or make any representations regarding the use or the results of the use of the Service in terms of correctness, accuracy, reliability, or otherwise. You understand and agree that you download or otherwise obtain third-party or User Data, material, or data through the use of the Service at your own discretion and risk and that you will be solely responsible for any damage to your computer system or loss of data that results from the download of such third-party or User-provided information, material, or data. Fossa will not be responsible or liable for the deletion, correction, destruction, damage, loss, or failure to store or maintain any third-party or User Data.

12.4. Certain state laws do not allow limitations on implied warranties or the exclusion or limitation of certain damages. If these laws apply to you, some or all of the above disclaimers, exclusions, or limitations may not apply to you, and you might have additional rights.

13.1. Under no circumstances, including, but not limited to, negligence, will Fossa or its affiliates, contractors, employees, agents, or third-party partners, licensors, or suppliers be liable for any special, indirect, incidental, consequential, punitive, reliance, or exemplary damages (including without limitation losses or liability resulting from loss of data, loss of revenue, anticipated profits, or loss of business opportunity) that result from your use of or your inability to use the information or materials on the Service, or any other interactions with Fossa, even if Fossa or a Fossa authorized representative has been advised of the possibility of such damages. Applicable law may not allow the limitation or exclusion of liability for incidental or consequential damages, so the above limitation or exclusion may not apply to you. In such cases, Fossa's liability will be limited to the fullest extent permitted by applicable law.

13.2.
In no event will the total liability of Fossa or its affiliates, contractors, employees, agents, or third-party partners, licensors, or suppliers to you for all damages, losses, and causes of action arising out of or relating to these Terms or your use of the Service, including without limitation your interactions with other users (whether in contract, tort including negligence, warranty, or otherwise), exceed the amount paid by you, if any, for accessing the Service during the twelve (12) months immediately preceding the day the act or omission occurred that gave rise to your claim, or one hundred dollars, whichever is greater.

13.3. Fossa shall not have any liability for any matter beyond its reasonable control.

13.4. You acknowledge and agree that Fossa and its Fossa Resellers (as applicable) have offered their products and services, set their prices, and entered into these Terms in reliance upon the disclaimers of warranty and the limitations of liability set forth herein, that the disclaimers of warranty and the limitations of liability set forth herein reflect a reasonable and fair allocation of risk between the parties (including the risk that a contract remedy may fail of its essential purpose and cause consequential loss), and that the disclaimers of warranty and the limitations of liability set forth herein form an essential basis of the bargain between you and Fossa.

You agree to defend, indemnify and hold harmless Fossa and its subsidiaries, agents, managers, and other affiliated companies, and their employees, contractors, agents, officers and directors, from and against any and all claims, damages, obligations, losses, liabilities, costs or debt, and expenses (including but not limited to attorney's fees) arising from: (a) your use of and access to the Service, including any data or work transmitted or received by you; (b) your violation of any term of these Terms, including without limitation your breach of any of the representations and warranties above; (c) your violation of any third-party right, including without limitation any right of privacy, publicity rights or intellectual property rights; (d) your violation of any law, rule or regulation of the United States or any other country; (e) any claim or damages that arise as a result of any of your User Data or any other data that are submitted via your account; or (f) any other party's access and use of the Service with your unique username, password or other appropriate security code. Fossa will have the right to control the defense, settlement, adjustment or compromise of any such claims, actions or proceedings by using counsel selected by Fossa. Fossa will use reasonable efforts to notify you of any such claims, actions, or proceedings upon becoming aware of the same.

The hosted version of the Service is controlled and operated from our facilities in the United States. The on-premises version of the Service is controlled and operated at your facility. Fossa makes no representations that the Service is appropriate or available for use in other locations. Those who access or use the Service from other jurisdictions do so of their own volition and are entirely responsible for compliance with local law, including but not limited to export and import regulations. You may not use the Service if you are a resident of a country embargoed by the United States, or are a foreign person or entity blocked or denied by the United States government.
Unless otherwise explicitly stated, all materials found on the Service are solely directed to individuals, companies, or other entities located in the U.S. By using the Service, you are consenting to have your personal data transferred to and processed in the United States. The Service and the underlying information and technology may not be downloaded or otherwise exported or re-exported (a) into (or to a national or resident of) any country to which the U.S. has embargoed goods; or (b) to anyone on the U.S. Treasury Department's list of Specially Designated Nationals or the U.S. Commerce Department's Table of Deny Orders. By downloading or using the Service, you are agreeing to the foregoing and you represent and warrant that you are not located in, under the control of, or a national or resident of any such country or on any such list, and you agree to comply with all export laws and other applicable laws.

16.1. Governing Law. This Agreement shall be governed by the internal substantive laws of the State of California, without respect to its conflict of laws principles. Notwithstanding the preceding sentence with respect to the substantive law, any arbitration conducted pursuant to the terms of these Terms shall be governed by the Federal Arbitration Act (9 U.S.C. §§ 1-16). The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. You agree to submit to the personal jurisdiction of the federal and state courts located in Santa Clara County, California for any actions for which we retain the right to seek injunctive or other equitable relief in a court of competent jurisdiction to prevent the actual or threatened infringement, misappropriation or violation of our copyrights, trademarks, trade secrets, patents, or other intellectual property or proprietary rights, as set forth in the Arbitration provision below, including any provisional relief required to prevent irreparable harm. You agree that Santa Clara County, California is the proper forum for any appeals of an arbitration award or for trial court proceedings if the arbitration provision below is found to be unenforceable.

16.2. Arbitration. Read this section carefully because it requires the parties to arbitrate their disputes and limits the manner in which you can seek relief from Fossa. For any dispute with Fossa, you agree to first contact us at [emailprotected] and attempt to resolve the dispute with us informally. In the unlikely event that Fossa has not been able to resolve a dispute it has with you after sixty (60) days, we each agree to resolve any claim, dispute, or controversy (excluding any claims for injunctive or other equitable relief as provided below) arising out of or in connection with or relating to these Terms, or the breach or alleged breach thereof (collectively, "Claims"), by binding arbitration by JAMS, under the Optional Expedited Arbitration Procedures then in effect for JAMS, except as provided herein. JAMS may be contacted at www.jamsadr.com. The arbitration will be conducted in Santa Clara County, California, unless you and Fossa agree otherwise. If you are using the Service for commercial purposes, each party will be responsible for paying any JAMS filing, administrative and arbitrator fees in accordance with JAMS rules, and the award rendered by the arbitrator shall include costs of arbitration, reasonable attorneys' fees and reasonable costs for expert and other witnesses.
Any judgment on the award rendered by the arbitrator may be entered in any court of competent jurisdiction. Nothing in this Section shall be deemed as preventing Fossa from seeking injunctive or other equitable relief from the courts as necessary to prevent the actual or threatened infringement, misappropriation, or violation of our data security, intellectual property or other proprietary rights.

16.3. Class Action/Jury Trial Waiver. With respect to all persons and entities, regardless of whether they have obtained or used the Service for personal, commercial or other purposes, all claims must be brought in the parties' individual capacity, and not as a plaintiff or class member in any purported class action, collective action, private attorney general action or other representative proceeding. This waiver applies to class arbitration, and, unless we agree otherwise, the arbitrator may not consolidate more than one person's claims. You agree that, by entering into these Terms, you and Fossa are each waiving the right to a trial by jury or to participate in a class action, collective action, private attorney general action, or other representative proceeding of any kind.

17.1. Notice and Modifications. Fossa may provide you with notices, including those regarding changes to Fossa's terms and conditions, by email, regular mail, or postings on the Service. Notice will be deemed given twenty-four (24) hours after an email is sent, unless Fossa is notified that the email address is invalid. Alternatively, we may give you legal notice by mail to a postal address, if provided by you through the Service. In such case, notice will be deemed given three (3) days after the date of mailing. Notice posted on the Service is deemed given five (5) days following the initial posting. Fossa reserves the right to determine the form and means of providing notifications to our Users, provided that you may opt out of certain means of notification as described in these Terms. Fossa is not responsible for any automatic filtering you or your network provider may apply to email notifications we send to the email address you provide us. Fossa may, in its sole discretion, modify or update these Terms from time to time, and so you should review this page periodically. When we change the Agreement in a material manner, we will update the last modified date at the bottom of this page and notify you that material changes have been made to the Agreement. Your continued use of the Service after any such change constitutes your acceptance of the new Terms of Service. If you do not agree to any of these terms or any future Terms of Service, do not use or access (or continue to access) the Service.

17.2. U.S. Government End Users. The Service was developed by private financing and constitutes a "Commercial Item," as that term is defined at 48 C.F.R. 2.101. The Service consists of "Commercial Computer Software" and "Commercial Computer Software Documentation," as such terms are used in 48 C.F.R. 12.212. Consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4, all U.S. Government End Users acquire only those rights in the Service and the Documentation that are specifically provided by this Agreement. Consistent with 48 C.F.R. 12.211, all U.S. Government End Users acquire only technical data and the rights in that data customarily provided to the public, as specifically provided in this Agreement.

17.3. Waiver. The failure of Fossa to exercise or enforce any right or provision of these Terms will not constitute a waiver of such right or provision.
Any waiver of any provision of these Terms will be effective only if in writing and signed by Fossa.

17.4. Severability. If any provision of these Terms is held to be unlawful, void, or for any reason unenforceable, then that provision will be limited or eliminated from these Terms to the minimum extent necessary and will not affect the validity and enforceability of any remaining provisions; except that in the event of unenforceability of the universal Class Action/Jury Trial Waiver, the entire arbitration agreement shall be unenforceable.

17.5. Assignment. These Terms, and any rights and licenses granted hereunder, may not be transferred or assigned by you, but may be assigned by Fossa without restriction.

17.6. Survival. Upon termination of these Terms, any provision which, by its nature or express terms should survive, shall survive such termination or expiration, including, but not limited to, all payment obligations and Sections 1.3, 3, 8.1, 9, and 12 through 17.

17.7. Headings. The heading references herein are for convenience only, do not constitute a part of these Terms, and will not be deemed to limit or affect any of the provisions hereof.

17.8. Entire Agreement. These Terms, including the agreements incorporated by reference, constitute the entire agreement between you and Fossa relating to the subject matter herein and will not be modified except in writing, signed by both parties, or by a change made by Fossa as set forth in these Terms.

17.9. Claims. To the extent permissible under applicable law, you and Fossa agree that any cause of action you may have arising out of or related to the Service must commence within one (1) year after the cause of action accrues. Otherwise, such cause of action is permanently barred.

17.10. Disclosures. The Service is offered by Fossa, Inc., located at 114 Sansome St #210, San Francisco, CA 94104, and can be reached via email at [emailprotected]. If you are a California resident, (a) you may have this same information emailed to you by sending a letter to the foregoing address with your email address and a request for this information; and (b) in accordance with Cal. Civ. Code § 1789.3, you may report complaints to the Complaint Assistance Unit of the Division of Consumer Services of the California Department of Consumer Affairs by contacting them in writing at 1625 North Market Blvd., Suite N 112, Sacramento, CA 95834, or by telephone at (800) 952-5210 or (916) 445-1254.