{ "category": "App Definition and Development", "file_name": "debugging.md", "project_name": "Squash", "subcategory": "Application Definition & Image Build" }
[ { "data": "Version 1.90 is now available! Read about the new features and fixes from May. The Python extension supports debugging through the Python Debugger extension for several types of Python applications. For a short walkthrough of basic debugging, see Tutorial - Configure and run the debugger. Also see the Flask tutorial. Both tutorials demonstrate core skills like setting breakpoints and stepping through code. For general debugging features such as inspecting variables, setting breakpoints, and other activities that aren't language-dependent, review VS Code debugging. This article mainly addresses Python-specific debugging configurations, including the necessary steps for specific app types and remote debugging. The Python Debugger extension is automatically installed along with the Python extension for VS Code. It offers debugging features with debugpy for several types of Python applications, including scripts, web apps, remote processes and more. To verify it's installed, open the Extensions view (X (Windows, Linux Ctrl+Shift+X)) and search for @installed python debugger. You should see the Python Debugger extension listed in the results. You can refer to the extension's README page for information on supported Python versions. A configuration drives VS Code's behavior during a debugging session. Configurations are defined in a launch.json file that's stored in a .vscode folder in your workspace. Note: To change debugging configuration, your code must be stored in a folder. To initialize debug configurations, first select the Run view in the sidebar: If you don't yet have any configurations defined, you'll see a button to Run and Debug and a link to create a configuration (launch.json) file: To generate a launch.json file with Python configurations, do the following steps: Select the create a launch.json file link (outlined in the image above) or use the Run > Open configurations menu command. Select Python Debugger from the debugger options list. A configuration menu will open from the Command Palette allowing you to choose the type of debug configuration you want to use for our Python project file. If you want to debug a single Python script, select Python File in the Select a debug configuration menu that appears. Note: Starting a debugging session through the Debug Panel, F5 or Run > Start Debugging when no configuration exists will also bring up the debug configuration menu, but will not create a launch.json file. The Python Debugger extension then creates and opens a launch.json file that contains a pre-defined configuration based on what you previously selected, in this case, Python File. You can modify configurations (to add arguments, for example), and also add custom configurations. The details of configuration properties are covered later in this article under Standard configuration and options. Other configurations are also described in this article under Debugging specific app types. By default, VS Code shows only the most common configurations provided by the Python Debugger extension. You can select other configurations to include in launch.json by using the Add Configuration command shown in the list and the launch.json editor. When you use the command, VS Code prompts you with a list of all available configurations (be sure to select the Python option): Selecting the Attach using Process ID one yields the following result: See Debugging specific app types for details on all of these configurations. 
During debugging, the Status Bar shows the current configuration and the current debugging interpreter. Selecting the configuration brings up a list from which you can choose a different configuration: By default, the debugger uses the same interpreter selected for your workspace, just like other features of Python extension for VS Code. To use a different interpreter for debugging specifically, set the value for python in" }, { "data": "for the applicable debugger configuration. Alternately, use the Python interpreter indicator on the Status Bar to select a different one. If you're only interested in debugging a Python script, the simplest way is to select the down-arrow next to the run button on the editor and select Python Debugger: Debug Python File. If you're looking to debug a web application using Flask, Django or FastAPI, the Python Debugger extension provides dynamically created debug configurations based on your project structure under the Show all automatic debug configurations option, through the Run and Debug view. But if you're looking to debug other kinds of applications, you can start the debugger through the Run view by clicking on the Run and Debug button. When no configuration has been set, you'll be given a list of debugging options. Here, you can select the appropriate option to quickly debug your code. Two common options are to use the Python File configuration to run the currently open Python file or to use the Attach using Process ID configuration to attach the debugger to a process that is already running. For information about creating and using debugging configurations, see the Initialize configurations and Additional configurations sections. Once a configuration is added, it can be selected from the dropdown list and started using the Start Debugging button (F5). The debugger can also be run from the command line, if debugpy is installed in your Python environment. You can install debugpy using python -m pip install --upgrade debugpy into your Python environment. Tip: While using a virtual environment is not required, it is a recommended best practice. You can create a virtual environment in VS Code by opening the Command Palette (P (Windows, Linux Ctrl+Shift+P)) and running the Python: Create Virtual Environment command (). The debugger command line syntax is as follows: ``` python -m debugpy --listen | --connect [<host>:]<port> [--wait-for-client] [--configure-<name> <value>]... [--log-to <path>] [--log-to-stderr] <filename> | -m <module> | -c <code> | --pid <pid> [<arg>]... ``` From the command line, you could start the debugger using a specified port (5678) and script using the following syntax. This example assumes the script is long-running and omits the --wait-for-client flag, meaning that the script will not wait for the client to attach. ``` python -m debugpy --listen 5678 ./myscript.py ``` You would then use the following configuration to attach from the VS Code Python Debugger extension. ``` { \"name\": \"Python Debugger: Attach\", \"type\": \"debugpy\", \"request\": \"attach\", \"connect\": { \"host\": \"localhost\", \"port\": 5678 } } ``` Note: Specifying host is optional for listen, by default 127.0.0.1 is used. If you wanted to debug remote code or code running in a docker container, on the remote machine or container, you would need to modify the previous CLI command to specify a host. ``` python -m debugpy --listen 0.0.0.0:5678 ./myscript.py ``` The associated configuration file would then look as follows. 
``` { \"name\": \"Attach\", \"type\": \"debugpy\", \"request\": \"attach\", \"connect\": { \"host\": \"remote-machine-name\", // replace this with remote machine name \"port\": 5678 } } ``` Note: Be aware that when you specify a host value other than 127.0.0.1 or localhost you are opening a port to allow access from any machine, which carries security risks. You should make sure that you're taking appropriate security precautions, such as using SSH tunnels, when doing remote debugging. | Flag | Options | Description | |:-|:-|:-| | --listen or --connect | [<host>:]<port> | Required. Specifies the host address and port for the debug adapter server to wait for incoming connections (--listen) or to connect with a client that is waiting for an incoming connection" }, { "data": "This is the same address that is used in the VS Code debug configuration. By default, the host address is localhost (127.0.0.1). | | --wait-for-client | none | Optional. Specifies that the code should not run until there's a connection from the debug server. This setting allows you to debug from the first line of your code. | | --log-to | <path> | Optional. Specifies a path to an existing directory for saving logs. | | --log-to-stderr | none | Optional. Enables debugpy to write logs directly to stderr. | | --pid | <pid> | Optional. Specifies a process that is already running to inject the debug server into. | | --configure-<name> | <value> | Optional. Sets a debug property that must be known to the debug server before the client connects. Such properties can be used directly in launch configuration, but must be set in this manner for attach configurations. For example, if you don't want the debug server to automatically inject itself into subprocesses created by the process you're attaching to, use --configure-subProcess false. | Note: [<arg>] can be used to pass command-line arguments along to the app being launched. There may be instances where you need to debug a Python script that's invoked locally by another process. For example, you may be debugging a web server that runs different Python scripts for specific processing jobs. In such cases, you need to attach the VS Code debugger to the script once it's been launched: Run VS Code, open the folder or workspace containing the script, and create a launch.json for that workspace if one doesn't exist already. In the script code, add the following and save the file: ``` import debugpy debugpy.listen(5678) print(\"Waiting for debugger attach\") debugpy.waitforclient() debugpy.breakpoint() print('break on this line') ``` Open a terminal using Terminal: Create New Terminal, which activates the script's selected environment. In the terminal, install the debugpy package. In the terminal, start Python with the script, for example, python3 myscript.py. You should see the \"Waiting for debugger attach\" message that's included in the code, and the script halts at the debugpy.waitforclient() call. Switch to the Run and Debug view (D (Windows, Linux Ctrl+Shift+D)), select the appropriate configuration from the debugger dropdown list, and start the debugger. The debugger should stop on the debugpy.breakpoint() call, from which point you can use the debugger normally. You also have the option of setting other breakpoints in the script code using the UI instead of using debugpy.breakpoint(). Remote debugging allows you to step through a program locally within VS Code while it runs on a remote computer. It is not necessary to install VS Code on the remote computer. 
For added security, you may want or need to use a secure connection, such as SSH, to the remote computer when debugging. Note: On Windows computers, you may need to install Windows 10 OpenSSH to have the ssh command. The following steps outline the general process to set up an SSH tunnel. An SSH tunnel allows you to work on your local machine as if you were working directly on the remote in a more secure manner than if a port was opened for public access. On the remote computer: Enable port forwarding by opening the sshd_config config file (found under /etc/ssh/ on Linux and under %programfiles(x86)%/openssh/etc on Windows) and adding or modifying the following setting: ``` AllowTcpForwarding yes ``` Note: The default for AllowTcpForwarding is yes, so you might not need to make a change. If you had to add or modify AllowTcpForwarding, restart the SSH server. On Linux/macOS, run sudo service ssh restart; on Windows, run" }, { "data": "select OpenSSH or sshd in the list of services, and select Restart. On the local computer: Create an SSH tunnel by running ssh -2 -L sourceport:localhost:destinationport -i identityfile user@remoteaddress, using a selected port for destinationport and the appropriate username and the remote computer's IP address in user@remoteaddress. For example, to use port 5678 on IP address 1.2.3.4, the command would be ssh -2 -L 5678:localhost:5678 -i identityfile user@1.2.3.4. You can specify the path to an identity file, using the -i flag. Verify that you can see a prompt in the SSH session. In your VS Code workspace, create a configuration for remote debugging in your launch.json file, setting the port to match the port used in the ssh command and the host to localhost. You use localhost here because you've set up the SSH tunnel. ``` { \"name\": \"Python Debugger: Attach\", \"type\": \"debugpy\", \"request\": \"attach\", \"port\": 5678, \"host\": \"localhost\", \"pathMappings\": [ { \"localRoot\": \"${workspaceFolder}\", // Maps C:\\Users\\user1\\project1 \"remoteRoot\": \".\" // To current working directory ~/project1 } ] } ``` Starting debugging: Now that an SSH tunnel has been set up to the remote computer, you can begin your debugging. Both computers: make sure that identical source code is available. Both computers: install debugpy. Remote computer: there are two ways to specify how to attach to the remote process. In the source code, add the following lines, replacing address with the remote computer's IP address and port number (IP address 1.2.3.4 is shown here for illustration only). ``` import debugpy debugpy.listen(('1.2.3.4', 5678)) debugpy.wait_for_client() ``` The IP address used in listen should be the remote computer's private IP address. You can then launch the program normally, causing it to pause until the debugger attaches. Launch the remote process through debugpy, for example: ``` python3 -m debugpy --listen 1.2.3.4:5678 --wait-for-client -m myproject ``` This starts the package myproject using python3, with the remote computer's private IP address of 1.2.3.4 and listening on port 5678 (you can also start the remote Python process by specifying a file path instead of using -m, such as ./hello.py). Local computer: Only if you modified the source code on the remote computer as outlined above, then in the source code, add a commented-out copy of the same code added on the remote computer. Adding these lines makes sure that the source code on both computers matches line by line.
``` # import debugpy # debugpy.listen(('1.2.3.4', 5678)) # debugpy.wait_for_client() ``` Local computer: switch to the Run and Debug view (D (Windows, Linux Ctrl+Shift+D)) in VS Code, select the Python Debugger: Attach configuration. Local computer: set a breakpoint in the code where you want to start debugging. Local computer: start the VS Code debugger using the modified Python Debugger: Attach configuration and the Start Debugging button. VS Code should stop on your locally set breakpoints, allowing you to step through the code, examine variables, and perform all other debugging actions. Expressions that you enter in the Debug Console are run on the remote computer as well. Text output to stdout, as from print statements, appears on both computers. Other outputs, such as graphical plots from a package like matplotlib, however, appear only on the remote computer. During remote debugging, the debugging toolbar appears as below: On this toolbar, the disconnect button (F5 (Windows, Linux Shift+F5)) stops the debugger and allows the remote program to run to completion. The restart button (F5 (Windows, Linux Ctrl+Shift+F5)) restarts the debugger on the local computer but does not restart the remote program. Use the restart button only when you've already restarted the remote program and need to reattach the debugger. When you first create" }, { "data": "there are two standard configurations that run the active file in the editor in either the integrated terminal (inside VS Code) or the external terminal (outside of VS Code): ``` { \"configurations\": [ { \"name\": \"Python Debugger: Current File (Integrated Terminal)\", \"type\": \"debugpy\", \"request\": \"launch\", \"program\": \"${file}\", \"console\": \"integratedTerminal\" }, { \"name\": \"Python Debugger: Current File (External Terminal)\", \"type\": \"debugpy\", \"request\": \"launch\", \"program\": \"${file}\", \"console\": \"externalTerminal\" } ] } ``` The specific settings are described in the following sections. You can also add other settings, such as args, that aren't included in the standard configurations. Tip: It's often helpful in a project to create a configuration that runs a specific startup file. For example, if you want to always launch startup.py with the arguments --port 1593 when you start the debugger, create a configuration entry as follows: ``` { \"name\": \"Python Debugger: startup.py\", \"type\": \"debugpy\", \"request\": \"launch\", \"program\": \"${workspaceFolder}/startup.py\", \"args\" : [\"--port\", \"1593\"] }, ``` Provides the name for the debug configuration that appears in the VS Code dropdown list. Identifies the type of debugger to use; leave this set to debugpy for debugging Python code. Specifies the mode in which to start debugging: Provides the fully qualified path to the python program's entry module (startup file). The value ${file}, often used in default configurations, uses the currently active file in the editor. By specifying a specific startup file, you can always be sure of launching your program with the same entry point regardless of which files are open. For example: ``` \"program\": \"/Users/Me/Projects/MyProject/src/eventhandlers/__init__.py\", ``` You can also rely on a relative path from the workspace root. For example, if the root is /Users/Me/Projects/MyProject then you can use the following example: ``` \"program\": \"${workspaceFolder}/src/eventhandlers/__init__.py\", ``` Provides the ability to specify the name of a module to be debugged, similarly to the -m argument when run at the command line.
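For instance, a module-based entry might look like the following sketch (the module name my_package.main is illustrative, not taken from the text):
```
{
    "name": "Python Debugger: Module",
    "type": "debugpy",
    "request": "launch",
    "module": "my_package.main"
}
```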
For more information, see Python.org. The full path that points to the Python interpreter to be used for debugging. If not specified, this setting defaults to the interpreter selected for your workspace, which is equivalent to using the value ${command:python.interpreterPath}. To use a different interpreter, specify its path instead in the python property of a debug configuration. Alternately, you can use a custom environment variable that's defined on each platform to contain the full path to the Python interpreter to use, so that no other folder paths are needed. If you need to pass arguments to the Python interpreter, you can use the pythonArgs property. Specifies arguments to pass to the Python interpreter using the syntax \"pythonArgs\": [\"<arg 1>\", \"<arg 2>\",...]. Specifies arguments to pass to the Python program. Each element of the argument string that's separated by a space should be contained within quotes, for example: ``` \"args\": [\"--quiet\", \"--norepeat\", \"--port\", \"1593\"], ``` If you want to provide different arguments per debug run, you can set args to ${command:pickArgs}. This will prompt you to enter arguments each time you start a debug session. When set to true, breaks the debugger at the first line of the program being debugged. If omitted (the default) or set to false, the debugger runs the program to the first breakpoint. Specifies how program output is displayed as long as the defaults for redirectOutput aren't modified. | Value | Where output is displayed | |:-|:--| | \"internalConsole\" | VS Code debug console. If redirectOutput is set to False, no output is displayed. | | \"integratedTerminal\" (default) | VS Code Integrated Terminal. If redirectOutput is set to True, output is also displayed in the debug console. | | \"externalTerminal\" | Separate console" }, { "data": "If redirectOutput is set to True, output is also displayed in the debug console. | There is more than one way to configure the Run button, using the purpose option. Setting the option to debug-test defines that the configuration should be used when debugging tests in VS Code. Setting the option to debug-in-terminal defines that the configuration should only be used when accessing the Run Python File button on the top-right of the editor (regardless of whether the Run Python File or Debug Python File option the button provides is used). Note: The purpose option can't be used to start the debugger through F5 or Run > Start Debugging. Allows for the automatic reload of the debugger when changes are made to code after the debugger execution has hit a breakpoint. To enable this feature set {\"enable\": true} as shown in the following code. ``` { \"name\": \"Python Debugger: Current File\", \"type\": \"debugpy\", \"request\": \"launch\", \"program\": \"${file}\", \"console\": \"integratedTerminal\", \"autoReload\": { \"enable\": true } } ``` Note: When the debugger performs a reload, code that runs on import might be executed again. To avoid this situation, try to only use imports, constants, and definitions in your module, placing all code into functions. Alternatively, you can also use if __name__ == \"__main__\" checks. Specifies whether to enable subprocess debugging. Defaults to false, set to true to enable. For more information, see multi-target debugging. Specifies the current working directory for the debugger, which is the base folder for any relative paths used in code. If omitted, defaults to ${workspaceFolder} (the folder open in VS Code).
As an example, say ${workspaceFolder} contains a pycode folder containing app.py, and a data folder containing salaries.csv. If you start the debugger on pycode/app.py, then the relative paths to the data file vary depending on the value of cwd: | cwd | Relative path to data file | |:|:--| | Omitted or ${workspaceFolder} | data/salaries.csv | | ${workspaceFolder}/py_code | ../data/salaries.csv | | ${workspaceFolder}/data | salaries.csv | When set to true (the default for internalConsole), causes the debugger to print all output from the program into the VS Code debug output window. If set to false (the default for integratedTerminal and externalTerminal), program output is not displayed in the debugger output window. This option is typically disabled when using \"console\": \"integratedTerminal\" or \"console\": \"externalTerminal\" because there's no need to duplicate the output in the debug console. When omitted or set to true (the default), restricts debugging to user-written code only. Set to false to also enable debugging of standard library functions. When set to true, activates debugging features specific to the Django web framework. When set to true and used with \"console\": \"externalTerminal\", allows for debugging apps that require elevation. Using an external console is necessary to capture the password. When set to true, ensures that a Pyramid app is launched with the necessary pserve command. Sets optional environment variables for the debugger process beyond system environment variables, which the debugger always inherits. The values for these variables must be entered as strings. Optional path to a file that contains environment variable definitions. See Configuring Python environments - environment variable definitions file. If set to true, enables debugging of gevent monkey-patched code. When set to true, activates debugging features specific to the Jinja templating framework. The Python Debugger extension supports breakpoints and logpoints for debugging code. For a short walkthrough of basic debugging and using breakpoints, see Tutorial - Configure and run the debugger. Breakpoints can also be set to trigger based on expressions, hit counts, or a combination of" }, { "data": "The Python Debugger extension supports hit counts that are integers, in addition to integers preceded by the ==, >, >=, <, <=, and % operators. For example, you could set a breakpoint to trigger after five occurrences by setting a hit count of >5 For more information, see conditional breakpoints in the main VS Code debugging article. In your Python code, you can call debugpy.breakpoint() at any point where you want to pause the debugger during a debugging session. The Python Debugger extension automatically detects breakpoints that are set on non-executable lines, such as pass statements or the middle of a multiline statement. In such cases, running the debugger moves the breakpoint to the nearest valid line to ensure that code execution stops at that point. The configuration dropdown provides various different options for general app types: | Configuration | Description | |:-|:--| | Attach | See Remote debugging in the previous section. | | Django | Specifies \"program\": \"${workspaceFolder}/manage.py\", \"args\": [\"runserver\"]. Also adds \"django\": true to enable debugging of Django HTML templates. | | Flask | See Flask debugging below. | | Gevent | Adds \"gevent\": true to the standard integrated terminal configuration. 
| | Pyramid | Removes program, adds \"args\": [\"${workspaceFolder}/development.ini\"], adds \"jinja\": true for enabling template debugging, and adds \"pyramid\": true to ensure that the program is launched with the necessary pserve command. | Specific steps are also needed for remote debugging and Google App Engine. For details on debugging tests, see Testing. To debug an app that requires administrator privileges, use \"console\": \"externalTerminal\" and \"sudo\": \"True\". The Python Debugger: Flask configuration looks like this: ``` { \"name\": \"Python Debugger: Flask\", \"type\": \"debugpy\", \"request\": \"launch\", \"module\": \"flask\", \"env\": { \"FLASK_APP\": \"app.py\" }, \"args\": [ \"run\", \"--no-debugger\" ], \"jinja\": true }, ``` As you can see, this configuration specifies \"env\": {\"FLASK_APP\": \"app.py\"} and \"args\": [\"run\", \"--no-debugger\"]. The \"module\": \"flask\" property is used instead of program. (You may see \"FLASK_APP\": \"${workspaceFolder}/app.py\" in the env property, in which case modify the configuration to refer to only the filename. Otherwise, you may see \"Cannot import module C\" errors where C is a drive letter.) The \"jinja\": true setting also enables debugging for Flask's default Jinja templating engine. If you want to run Flask's development server in development mode, use the following configuration: ``` { \"name\": \"Python Debugger: Flask (development mode)\", \"type\": \"debugpy\", \"request\": \"launch\", \"module\": \"flask\", \"env\": { \"FLASK_APP\": \"app.py\", \"FLASK_ENV\": \"development\" }, \"args\": [ \"run\" ], \"jinja\": true }, ``` There are many reasons why the debugger may not work. Sometimes the debug console reveals specific causes, but the main reasons are as follows: Make sure the Python Debugger extension is installed and enabled in VS Code by opening the Extensions view (X (Windows, Linux Ctrl+Shift+X)) and searching for @installed python debugger. The path to the python executable is incorrect: check the path of your selected interpreter by running the Python: Select Interpreter command and looking at the current value: You have \"type\" set to the deprecated value \"python\" in your launch.json file: replace \"python\" with \"debugpy\" instead to work with the Python Debugger extension. There are invalid expressions in the watch window: clear all expressions from the Watch window and restart the debugger. If you're working with a multi-threaded app that uses native thread APIs (such as the Win32 CreateThread function rather than the Python threading APIs), it's presently necessary to include the following source code at the top of whichever file you want to debug: ``` import debugpy debugpy.debug_this_thread() ``` If you are working with a Linux system, you may receive a \"timed out\" error message when trying to apply a debugger to any running process." } ]
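A common cause of this "timed out" error, offered here as an assumption rather than something stated above, is the Linux Yama ptrace hardening, which prevents one process from attaching to another. You can check and temporarily relax it like this:
```
# Check the current ptrace scope (0 = unrestricted, 1 = restricted to child processes)
cat /proc/sys/kernel/yama/ptrace_scope

# Temporarily allow attaching to any process you own (resets on reboot)
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
```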
{ "category": "App Definition and Development", "file_name": "jdwp-spec.html.md", "project_name": "Squash", "subcategory": "Application Definition & Image Build" }
[ { "data": "Protocol details The JavaTM Debug Wire Protocol (JDWP) is the protocol used for communication between a debugger and the Java virtual machine (VM) which it debugs (hereafter called the target VM). JDWP is optional; it might not be available in some implementations of the Java(TM) 2 SDK. The existence of JDWP can allow the same debugger to work The JDWP differs from many protocol specifications in that it only details format and layout, not transport. A JDWP implementation can be designed to accept different transport mechanisms through a simple API. A particular transport does not necessarily support each of the debugger/target VM combinations listed above. The JDWP is designed to be simple enough for easy implementation, yet it is flexible enough for future growth. Currently, the JDWP does not specify any mechanism for transport rendezvous or any directory services. This may be changed in the future, but it may be addressed in a separate document. JDWP is one layer within the Java Platform Debugger Architecture (JPDA). This architecture also contains the higher-level Java Debug Interface (JDI). The JDWP is designed to facilitate efficient use by the JDI; many of its abilities are tailored to that end. The JDI is more appropriate than JDWP for many debugger tools, particularly those written in the Java programming language. For more information on the Java Platform Debugger Architecture, see the Java Platform Debugger Architecture documentation for this release. After the transport connection is established and before any packets are sent, a handshake occurs between the two sides of the connection: The handshake process has the following steps: The JDWP is packet based and is not stateful. There are two basic packet types: command packets and reply packets. Command packets may be sent by either the debugger or the target VM. They are used by the debugger to request information from the target VM, or to control program execution. Command packets are sent by the target VM to notify the debugger of some event in the target VM such as a breakpoint or exception. A reply packet is sent only in response to a command packet and always provides information success or failure of the command. Reply packets may also carry data requested in the command (for example, the value of a field or variable). Currently, events sent from the target VM do not require a response packet from the debugger. The JDWP is asynchronous; multiple command packets may be sent before the first reply packet is received. Command and reply packet headers are equal in size; this is to make transports easier to implement and abstract. The layout of each packet looks like this: All fields and data sent via JDWP should be in big-endian format. (See the Java Virtual Machine Specification for the definition of big-endian.) The first three fields are identical in both packet types. A simple monotonic counter should be adequate for most" }, { "data": "It will allow 2^32 unique outstanding packets and is the simplest implementation alternative. The reply bit, when set, indicates that this packet is a reply. The command set space is roughly divided as follows: In general, the data field of a command or reply packet is an abstraction of a group of multiple fields that define the command or reply data. Each subfield of a data field is encoded in big endian (Java) format. The detailed composition of data fields for each command and its reply are described in this section. 
There is a small set of common data types that are common to many of the different JDWP commands and replies. They are described below. | Name | Size | Description | |:-|:-|:| | byte | 1 byte | A byte value. | | boolean | 1 byte | A boolean value, encoded as 0 for false and non-zero for true. | | int | 4 bytes | An four-byte integer value. The integer is signed unless explicitly stated to be unsigned. | | long | 8 bytes | An eight-byte integer value. The value is signed unless explicitly stated to be unsigned. | | objectID | Target VM-specific, up to 8 bytes (see below) | Uniquely identifies an object in the target VM. A particular object will be identified by exactly one objectID in JDWP commands and replies throughout its lifetime (or until the objectID is explicitly disposed). An ObjectID is not reused to identify a different object unless it has been explicitly disposed, regardless of whether the referenced object has been garbage collected. An objectID of 0 represents a null object. Note that the existence of an object ID does not prevent the garbage collection of the object. Any attempt to access a a garbage collected object with its object ID will result in the INVALID_OBJECT error code. Garbage collection can be disabled with the DisableCollection command, but it is not usually necessary to do so. | | tagged-objectID | size of objectID plus one byte | The first byte is a signature byte which is used to identify the object's type. See JDWP.Tag for the possible values of this byte (note that only object tags, not primitive tags, may be used). It is followed immediately by the objectID itself. | | threadID | same as objectID | Uniquely identifies an object in the target VM that is known to be a thread | | threadGroupID | same as objectID | Uniquely identifies an object in the target VM that is known to be a thread group | | stringID | same as objectID | Uniquely identifies an object in the target VM that is known to be a string object. Note: this is very different from string, which is a" }, { "data": "| | classLoaderID | same as objectID | Uniquely identifies an object in the target VM that is known to be a class loader object | | classObjectID | same as objectID | Uniquely identifies an object in the target VM that is known to be a class object. | | arrayID | same as objectID | Uniquely identifies an object in the target VM that is known to be an array. | | referenceTypeID | same as objectID | Uniquely identifies a reference type in the target VM. It should not be assumed that for a particular class, the classObjectID and the referenceTypeID are the same. A particular reference type will be identified by exactly one ID in JDWP commands and replies throughout its lifetime A referenceTypeID is not reused to identify a different reference type, regardless of whether the referenced class has been unloaded. | | classID | same as referenceTypeID | Uniquely identifies a reference type in the target VM that is known to be a class type. | | interfaceID | same as referenceTypeID | Uniquely identifies a reference type in the target VM that is known to be an interface type. | | arrayTypeID | same as referenceTypeID | Uniquely identifies a reference type in the target VM that is known to be an array type. | | methodID | Target VM-specific, up to 8 bytes (see below) | Uniquely identifies a method in some class in the target VM. The methodID must uniquely identify the method within its class/interface or any of its subclasses/subinterfaces/implementors. 
A methodID is not necessarily unique on its own; it is always paired with a referenceTypeID to uniquely identify one method. The referenceTypeID can identify either the declaring type of the method or a subtype. | | fieldID | Target VM-specific, up to 8 bytes (see below) | Uniquely identifies a field in some class in the target VM. The fieldID must uniquely identify the field within its class/interface or any of its subclasses/subinterfaces/implementors. A fieldID is not necessarily unique on its own; it is always paired with a referenceTypeID to uniquely identify one field. The referenceTypeID can identify either the declaring type of the field or a subtype. | | frameID | Target VM-specific, up to 8 bytes (see below) | Uniquely identifies a frame in the target VM. The frameID must uniquely identify the frame within the entire VM (not only within a given thread). The frameID need only be valid during the time its thread is suspended. | | location | Target VM specific | An executable location. The location is identified by one byte type tag followed by a a classID followed by a methodID followed by an unsigned eight-byte index, which identifies the location within the method. Index values are restricted as follows: The index of the start location for the method is less than all other locations in the method. The index of the end location for the method is greater than all other locations in the" }, { "data": "If a line number table exists for a method, locations that belong to a particular line must fall between the line's location index and the location index of the next line in the table. Index values within a method are monotonically increasing from the first executable point in the method to the last. For many implementations, each byte-code instruction in the method has its own index, but this is not required. The type tag is necessary to identify whether location's classID identifies a class or an interface. Almost all locations are within classes, but it is possible to have executable code in the static initializer of an interface. | | string | Variable | A UTF-8 encoded string, not zero terminated, preceded by a four-byte integer length. | | value | Variable | A value retrieved from the target VM. The first byte is a signature byte which is used to identify the type. See JDWP.Tag for the possible values of this byte. It is followed immediately by the value itself. This value can be an objectID (see Get ID Sizes) or a primitive value (1 to 8 bytes). More details about each value type can be found in the next table. | | untagged-value | Variable | A value as described above without the signature byte. This form is used when the signature information can be determined from context. | | arrayregion | Variable | A compact representation of values used with some array operations. The first byte is a signature byte which is used to identify the type. See JDWP.Tag for the possible values of this byte. Next is a four-byte integer indicating the number of values in the sequence. This is followed by the values themselves: Primitive values are encoded as a sequence of untagged-values; Object values are encoded as a sequence of values. | Note that the existence of an object ID does not prevent the garbage collection of the object. Any attempt to access a a garbage collected object with its object ID will result in the INVALID_OBJECT error code. Garbage collection can be disabled with the DisableCollection command, but it is not usually necessary to do so. 
The type tag is necessary to identify whether location's classID identifies a class or an interface. Almost all locations are within classes, but it is possible to have executable code in the static initializer of an interface. Object ids, reference type ids, field ids, method ids, and frame ids may be sized differently in different target VM implementations. Typically, their sizes correspond to size of the native identifiers used for these items in JNI and JVMDI calls. The maximum size of any of these types is 8 bytes. The \"idSizes\" command in the VirtualMachine command set is used by the debugger to determine the size of each of these types. If a debuggee receives a Command Packet with a non-implemented or non-recognized command set or command then it returns a Reply Packet with the error code field set to NOT_IMPLEMENTED (see Error Constants). Protocol details" } ]
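To make the string data type above concrete, here is a minimal sketch (not part of the spec text) of encoding and decoding a JDWP string: a four-byte big-endian length followed by UTF-8 bytes, with no zero terminator.
```
import struct

def encode_jdwp_string(s: str) -> bytes:
    data = s.encode("utf-8")
    return struct.pack(">I", len(data)) + data

def decode_jdwp_string(buf: bytes, offset: int = 0):
    # Returns the decoded string and the offset just past it.
    (length,) = struct.unpack_from(">I", buf, offset)
    start = offset + 4
    return buf[start:start + length].decode("utf-8"), start + length

encoded = encode_jdwp_string("java.lang.String")
print(decode_jdwp_string(encoded))  # ('java.lang.String', 20)
```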
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Tilt", "subcategory": "Application Definition & Image Build" }
[ { "data": "Kubernetes for Prod, Tilt for Dev Modern apps are made of too many services. Theyre everywhere and in constant communication. Tilt powers microservice development and makes sure they behave! Run tilt up to work in a complete dev environment configured for your team. Tilt automates all the steps from a code change to a new process: watching files, building container images, and bringing your environment up-to-date. Think docker build && kubectl apply or docker-compose up. Installing the tilt binary is a one-step command. ``` curl -fsSL https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.sh | bash ``` ``` iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.ps1')) ``` Completely new to Tilt? Watch our two minute explanation video and browse through FAQs about Tilt. Then head over to our tutorial to run Tilt yourself for the very first time! Setting up Tilt for existing services? We have best practice guides for: Optimizing your Tiltfile? Search for the function you need in our API reference. Questions: Join the Kubernetes slack and find us in the #tilt channel. Or file an issue. For code snippets of Tiltfile functionality shared by the Tilt community, check out Tilt Extensions. Contribute: Check out our guidelines to contribute to Tilts source code. To extend the capabilities of Tilt via new Tiltfile functionality, read more about Extensions. Follow along: @tilt_dev on Twitter. For updates and announcements, follow the blog or subscribe to the newsletter. Help us make Tilt even better: Tilt sends anonymized usage data, so we can improve Tilt on every platform. Details in What does Tilt send?. If you find a security issue in Tilt, see our security policy. We expect everyone in our community (users, contributors, followers, and employees alike) to abide by our Code of Conduct. Built with from New York, Philadelphia, Boston, Minneapolis, and the" } ]
{ "category": "App Definition and Development", "file_name": "install.html.md", "project_name": "Tilt", "subcategory": "Application Definition & Image Build" }
[ { "data": "Tilt is available for macOS, Linux, and Windows. Youll also need: Docker for Mac contains Docker, kubectl, and a Kubernetes cluster. ``` kubectl config use-context docker-desktop ``` ``` curl -fsSL https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.sh | bash ``` ``` curl -fsSL https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.sh | bash ``` Alternatively, Ubuntu users sometimes prefer Microk8s instead of Kind because it integrates well with Ubuntu. See the Choosing a Local Dev Cluster guide for more Linux options. Docker for Windows contains Docker, kubectl, and a Kubernetes cluster. ``` kubectl config use-context docker-desktop ``` ``` iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.ps1')) ``` If you have Scoop installed, the installer will use that to make Tilt easy to access and upgrade. Otherwise, you will need to add the tilt install directory on your $PATH (or create an alias) to make it easier to access. After you install Tilt, verify that you installed it correctly with: ``` tilt version ``` If you have any trouble installing Tilt, look for the error message in the Troubleshooting FAQ. Youre ready to start using Tilt! Try our Tutorial to learn about Tilt or jump right in with the Write a Tiltfile Guide. We offer 1-step installation scripts that will install the most recent version of Tilt. The installer first checks if you can install Tilt with a package manager, like Homebrew or Scoop. You can also use these installers directly. ``` brew install tilt-dev/tap/tilt ``` ``` scoop bucket add tilt-dev https://github.com/tilt-dev/scoop-bucket scoop install tilt ``` ``` conda config --add channels conda-forge conda install tilt ``` ``` asdf plugin add tilt asdf install tilt 0.33.16 asdf global tilt 0.33.16 ``` If you dont have a package manager installed, the installer will download a tilt static binary for you and put it in a reasonable place. (~/.local/bin, /usr/local/bin, or ~/bin depending on your OS and whats already on your $PATH. See Tilts GitHub Releases page for specific versions. If youd prefer to download the binary manually: On macOS: ``` curl -fsSL https://github.com/tilt-dev/tilt/releases/download/v0.33.16/tilt.0.33.16.mac.x86_64.tar.gz | tar -xzv tilt && \\ sudo mv tilt /usr/local/bin/tilt ``` On Linux: ``` curl -fsSL https://github.com/tilt-dev/tilt/releases/download/v0.33.16/tilt.0.33.16.linux.x86_64.tar.gz | tar -xzv tilt && \\ sudo mv tilt /usr/local/bin/tilt ``` On Windows: ``` Invoke-WebRequest \"https://github.com/tilt-dev/tilt/releases/download/v0.33.16/tilt.0.33.16.windows.x86_64.zip\" -OutFile \"tilt.zip\" Expand-Archive \"tilt.zip\" -DestinationPath \"tilt\" Move-Item -Force -Path \"tilt\\tilt.exe\" -Destination \"$home\\bin\\tilt.exe\" ``` Finally, if you want to install tilt from source, see the developers guide. Building from source requires both Go and TypeScript/JavaScript tools, and dynamically compiles the TypeScript on every run. We only recommend this if you want to make changes to Tilt. Built with from New York, Philadelphia, Boston, Minneapolis, and the" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Telepresence", "subcategory": "Application Definition & Image Build" }
[ { "data": "Products Built on Envoy Proxy BY USE CASE BY INDUSTRY BY ROLE LEARN LISTEN ACT Company Docs DocsTelepresence OSSTelepresence Architecture The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons and then acts as a user-friendly interface to the Telepresence User Daemon. Telepresence has Daemons that run on a developer's workstation and act as the main point of communication to the cluster's network in order to communicate with the cluster and handle intercepted traffic. The User-Daemon coordinates the creation and deletion of intercepts by communicating with the Traffic Manager. All requests from and to the cluster go through this Daemon. The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a Virtual Network Device (VIF). For a detailed description of how the VIF manages traffic and why it is necessary please refer to this blog post: Implementing Telepresence Networking with a TUN Device. The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all relevant inbound and outbound traffic, and tracking active intercepts. The Traffic-Manager is installed, either by a cluster administrator using a Helm Chart, or on demand by the Telepresence User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager deployment, and if missing it will make an attempt to install it using its embedded Helm Chart. When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview URL, it forwards the request to the ingress service specified at the Preview URL creation. The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent container is injected into the workload's pod(s). You can see the Traffic Agent's status by running telepresence list or kubectl describe pod <pod-name>. Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the pod usually handling requests on that port. ON THIS PAGE Were here to help if you have questions." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Akuity", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "For access to the Akuity Platform please create an Akuity user account. After completing the registration process, activate your account using the link sent to your email. After registering and activating your account, if this is your first time using the platform, you must also create an organization: Click the create or join link. Click + New organization in the upper right hand corner of the dashboard. Name your organization following the rules listed below the Organization Name field. Navigate to Argo CD. Click + Create in the upper right hand corner of the dashboard. Name your instance following the rules listed below the Instance Name field. (Optionally) Choose the Argo CD version you want to use. Click + Create. It will take several seconds to create your new Argo CD instance (Progressing status next to your new instance's name), so please be patient. Continue on to the next section to learn about connecting your new Argo CD instance to a running Kubernetes cluster." } ]
{ "category": "App Definition and Development", "file_name": "faq#_how-to-disable-telemetry-reporting.md", "project_name": "Visual Studio Code Kubernetes Tools", "subcategory": "Application Definition & Image Build" }
[ { "data": "Version 1.90 is now available! Read about the new features and fixes from May. Our docs contain a Common questions section as needed for specific topics. We've captured items here that don't fit in the other topics. If you don't see an answer to your question here, check our previously reported issues on GitHub and our release notes. Visual Studio Code is a streamlined code editor with support for development operations like debugging, task running, and version control. It aims to provide just the tools a developer needs for a quick code-build-debug cycle and leaves more complex workflows to fuller featured IDEs, such as Visual Studio IDE. VS Code runs on macOS, Linux, and Windows. See the Requirements documentation for the supported versions. You can find more platform specific details in the Setup overview. Yes, VS Code is free for private or commercial use. See the product license for details. VS Code collects usage data and sends it to Microsoft to help improve our products and services. Read our privacy statement and telemetry documentation to learn more. If you don't want to send usage data to Microsoft, you can set the telemetry.telemetryLevel user setting to off. From File > Preferences > Settings, search for telemetry, and set the Telemetry: Telemetry Level setting to off. This will silence all telemetry events from VS Code going forward. Important Notice: VS Code gives you the option to install Microsoft and third party extensions. These extensions may be collecting their own usage data and are not controlled by the telemetry.telemetryLevel setting. Consult the specific extension's documentation to learn about its telemetry reporting. VS Code uses experiments to try out new features or progressively roll them out. Our experimentation framework calls out to a Microsoft-owned service and is therefore disabled when telemetry is disabled. However, if you want to disable experiments regardless of your telemetry preferences, you may set the workbench.enableExperiments user setting to false. From File > Preferences > Settings, search for experiments, and uncheck the Workbench: Enable Experiments setting. This will prevent VS Code from calling out to the service and opt out of any ongoing experiments. VS Code collects data about any crashes that occur and sends it to Microsoft to help improve our products and services. Read our privacy statement and telemetry documentation to learn more. If you don't want to send crash data to Microsoft, you can change the telemetry.telemetryLevel user setting to off. From File > Preferences > Settings, search for telemetry, and set the Telemetry: Telemetry Level setting to off. This will silence all telemetry events including crash reporting from VS Code. You will need to restart VS Code for the setting change to take effect. Now that the General Data Protection Regulation (GDPR) is in effect, we want to take this opportunity to reiterate that we take privacy very seriously. That's both for Microsoft as a company and specifically within the VS Code team. To support GDPR: You can learn more about VS Code's GDPR compliance in the telemetry documentation. Beyond crash reporting and telemetry, VS Code uses online services for various other purposes such as downloading product updates, finding, installing, and updating extensions, or providing Natural Language Search within the Settings editor. You can learn more in Managing online services. You can choose to turn on/off features that use these services. 
From File > Preferences > Settings, and type the tag @tag:usesOnlineServices. This will display all settings that control the usage of online services and you can individually switch them on or" }, { "data": "By default, VS Code is set up to auto-update for macOS and Windows users when we release new updates. If you do not want to get automatic updates, you can set the Update: Mode setting from default to none. To modify the update mode, go to File > Preferences > Settings, search for update mode and change the setting to none. If you use the JSON editor for your settings, add the following line: ``` \"update.mode\": \"none\" ``` You can install a previous release of VS Code by uninstalling your current version and then installing the download provided at the top of a specific release notes page. Note: On Linux: If the VS Code repository was installed correctly then your system package manager should handle auto-updating in the same way as other packages on the system. See Installing VS Code on Linux. By default, VS Code will also auto-update extensions as new versions become available. If you do not want extensions to automatically update, you can clear the Extensions: Auto Update check box in the Settings editor (, (Windows, Linux Ctrl+,)). If you use the JSON editor to modify your settings, add the following line: ``` \"extensions.autoUpdate\": false ``` You can find the VS Code licenses, third party notices and Chromium Open Source credit list under your VS Code installation location resources\\app folder. VS Code's ThirdPartyNotices.txt, Chromium's Credits_*.html, and VS Code's English language LICENSE.txt are available under resources\\app. Localized versions of LICENSE.txt by language ID are under resources\\app\\licenses. To learn why Visual Studio Code, the product, has a different license than the open-source vscode GitHub repository, see issue #60 for a detailed explanation. The github.com/microsoft/vscode repository (Code - OSS) is where we develop the Visual Studio Code product. Not only do we write code and work on issues there, we also publish our roadmap and monthly iteration and endgame plans. The source code is available to everyone under a standard MIT license. Visual Studio Code is a distribution of the Code - OSS repository with Microsoft specific customizations (including source code), released under a traditional Microsoft product license. See the Visual Studio Code and 'Code - OSS' Differences article for more details. Microsoft Visual Studio Code is a Microsoft licensed distribution of 'Code - OSS' that includes Microsoft proprietary assets (such as icons) and features (Visual Studio Marketplace integration, small aspects of enabling Remote Development). While these additions make up a very small percentage of the overall distribution code base, it is more accurate to say that Visual Studio Code is \"built\" on open source, rather than \"is\" open source, because of these differences. More information on what each distribution includes can be found in the Visual Studio Code and 'Code - OSS' Differences article. Most extensions link to their license on their Marketplace page or in the overview section, when you select an extension in the Extensions view. For example: If you don't find a link to the license, you may find a license in the extension's repository if it is public, or you can contact the extension author through the Q & A section of the Marketplace. Extension authors are free to choose a license that fits their business needs. 
While many extension authors have opted to release their source code under an open-source license, some extensions like Wallaby.js, Google Cloud Code, and the VS Code Remote Development extensions use proprietary licenses. At Microsoft, we open source our extensions whenever" }, { "data": "However, reliance on existing proprietary source code or libraries, source code that crosses into Microsoft licensed tools or services (for example Visual Studio), and business model differences across the entirety of Microsoft will result in some extensions using a proprietary license. You can find a list of Microsoft contributed Visual Studio Code extensions and their licenses in the Microsoft Extension Licenses article. You can find the VS Code version information in the About dialog box. On macOS, go to Code > About Visual Studio Code. On Windows and Linux, go to Help > About. The VS Code version is the first Version number listed and has the version format 'major.minor.release', for example '1.27.0'. You can find links to some release downloads at the top of a version's release notes: If you need a type of installation not listed there, you can manually download via the following URLs: | Download type | URL | |:-|:--| | Windows x64 System installer | https://update.code.visualstudio.com/{version}/win32-x64/stable | | Windows x64 User installer | https://update.code.visualstudio.com/{version}/win32-x64-user/stable | | Windows x64 zip | https://update.code.visualstudio.com/{version}/win32-x64-archive/stable | | Windows x64 CLI | https://update.code.visualstudio.com/{version}/cli-win32-x64/stable | | Windows Arm64 System installer | https://update.code.visualstudio.com/{version}/win32-arm64/stable | | Windows Arm64 User installer | https://update.code.visualstudio.com/{version}/win32-arm64-user/stable | | Windows Arm64 zip | https://update.code.visualstudio.com/{version}/win32-arm64-archive/stable | | Windows Arm64 CLI | https://update.code.visualstudio.com/{version}/cli-win32-arm64/stable | | macOS Universal | https://update.code.visualstudio.com/{version}/darwin-universal/stable | | macOS Intel chip | https://update.code.visualstudio.com/{version}/darwin/stable | | macOS Intel chip CLI | https://update.code.visualstudio.com/{version}/cli-darwin-x64/stable | | macOS Apple silicon | https://update.code.visualstudio.com/{version}/darwin-arm64/stable | | macOS Apple silicon CLI | https://update.code.visualstudio.com/{version}/cli-darwin-arm64/stable | | Linux x64 | https://update.code.visualstudio.com/{version}/linux-x64/stable | | Linux x64 debian | https://update.code.visualstudio.com/{version}/linux-deb-x64/stable | | Linux x64 rpm | https://update.code.visualstudio.com/{version}/linux-rpm-x64/stable | | Linux x64 snap | https://update.code.visualstudio.com/{version}/linux-snap-x64/stable | | Linux x64 CLI | https://update.code.visualstudio.com/{version}/cli-linux-x64/stable | | Linux Arm32 | https://update.code.visualstudio.com/{version}/linux-armhf/stable | | Linux Arm32 debian | https://update.code.visualstudio.com/{version}/linux-deb-armhf/stable | | Linux Arm32 rpm | https://update.code.visualstudio.com/{version}/linux-rpm-armhf/stable | | Linux Arm32 CLI | https://update.code.visualstudio.com/{version}/cli-linux-armhf/stable | | Linux Arm64 | https://update.code.visualstudio.com/{version}/linux-arm64/stable | | Linux Arm64 debian | https://update.code.visualstudio.com/{version}/linux-deb-arm64/stable | | Linux Arm64 rpm | https://update.code.visualstudio.com/{version}/linux-rpm-arm64/stable | | 
Linux Arm64 CLI | https://update.code.visualstudio.com/{version}/cli-linux-arm64/stable | Substitute the specific release you want in the {version} placeholder. For example, to download the Linux Arm64 debian version for 1.83.1, you would use ``` https://update.code.visualstudio.com/1.83.1/linux-deb-arm64/stable ``` You can use the version string latest, if you'd like to always download the latest VS Code stable version. Windows x86 32-bit versions are no longer actively supported after release 1.83 and could pose a security risk. | Download type | URL | |:--|:| | Windows x86 System installer | https://update.code.visualstudio.com/{version}/win32/stable | | Windows x86 User installer | https://update.code.visualstudio.com/{version}/win32-user/stable | | Windows x86 zip | https://update.code.visualstudio.com/{version}/win32-archive/stable | | Windows x86 CLI | https://update.code.visualstudio.com/{version}/cli-win32-ia32/stable | Want an early peek at new VS Code features? You can try prerelease versions of VS Code by installing the \"Insiders\" build. The Insiders build installs side by side to your stable VS Code install and has isolated settings, configurations, and extensions. The Insiders build is updated nightly so you'll get the latest bug fixes and feature updates from the day before. To install the Insiders build, go to the Insiders download page. Are there guidelines for using the icons and names? You can download the official Visual Studio Code icons and read the usage guidelines at Icons and names usage guidelines. A VS Code \"workspace\" is usually just your project root folder. VS Code uses the \"workspace\" concept in order to scope project configurations such as project-specific settings as well as config files for debugging and tasks. Workspace files are stored at the project root in a .vscode" }, { "data": "You can also have more than one root folder in a VS Code workspace through a feature called Multi-root workspaces. You can learn more in the What is a VS Code \"workspace\"? article. Yes, VS Code has a Portable Mode that lets you keep settings and data in the same location as your installation, for example, on a USB drive. For bugs, feature requests or to contact an extension author, you should use the links available in the Visual Studio Code Marketplace or use Help: Report Issue from the Command Palette. However, if there is an issue where an extension does not follow our code of conduct, for example it includes profanity, pornography or presents a risk to the user, then we have an email alias to report the issue. Once the mail is received, our Marketplace team will look into an appropriate course of action, up to and including unpublishing the extension. VS Code does a background check to detect if the installation has been changed on disk and if so, you will see the text [Unsupported] in the title bar. This is done since some extensions directly modify (patch) the VS Code product in such a way that is semi-permanent (until the next update) and this can cause hard to reproduce issues. We are not trying to block VS Code patching, but we want to raise awareness that patching VS Code means you are running an unsupported version. Reinstalling VS Code will replace the modified files and silence the warning. You may also see the [Unsupported] message if VS Code files have been mistakenly quarantined or removed by anti-virus software (see issue #94858 for an example). Check your anti-virus software settings and reinstall VS Code to repair the missing files. 
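Returning to the download URLs listed earlier: a small shell snippet can assemble one of them from a version string and a platform identifier taken from the table. This is only an illustrative sketch; the URL pattern comes from the documentation above, while the output filename is an arbitrary choice.

```
VERSION="1.83.1"              # any released version, or "latest" as noted above
PLATFORM="linux-deb-arm64"    # any platform identifier from the table above
curl -fL "https://update.code.visualstudio.com/${VERSION}/${PLATFORM}/stable" \
  -o "vscode-${VERSION}-${PLATFORM}"
```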
This section applies to macOS and Linux environments only. When VS Code is launched from a terminal (for example, via code .), it has access to environment settings defined in your .bashrc or .zshrc files. This means features like tasks or debug targets also have access to those settings. However, when launching from your platform's user interface (for example, the VS Code icon in the macOS dock), you normally are not running in the context of a shell and you don't have access to those environment settings. This means that depending on how you launch VS Code, you may not have the same environment. To work around this, when launched via a UI gesture, VS Code will start a small process to run (or \"resolve\") the shell environment defined in your .bashrc or .zshrc files. If, after a configurable timeout (via application.shellEnvironmentResolutionTimeout, defaults to 10 seconds), the shell environment has still not been resolved or resolving failed for any other reason, VS Code will abort the \"resolve\" process, launch without your shell's environment settings, and you will see an error like the following: If the error message indicates that resolving your shell environment took too long, the steps below can help you investigate what might be causing slowness. You can also increase the timeout by configuring the application.shellEnvironmentResolutionTimeout setting. But keep in mind that increasing this value means you will have to wait longer to use some of the features in VS Code, such as extensions. If you see other errors, please create an issue to get help. The process outlined below may help you identify which parts of your shell initialization are taking the most time: Note: While nvm is a powerful and useful Node.js package manager, it can cause slow shell startup times, if being run during shell" }, { "data": "You might consider package manager alternatives such as asdf or search on the internet for nvm performance suggestions. If modifying your shell environment isn't practical, you can avoid VS Code's resolving shell environment phase by launching VS Code directly from a fully initialized terminal. The Electron shell used by Visual Studio Code has trouble with some GPU (graphics processing unit) hardware acceleration. If VS Code is displaying a blank (empty) main window, you can try disabling GPU acceleration when launching VS Code by adding the Electron --disable-gpu command-line switch. ``` code --disable-gpu ``` If this happened after an update, deleting the GPUCache directory can resolve the issue. ``` rm -r ~/.config/Code/GPUCache ``` When you open a folder, VS Code will search for typical project files to offer you additional tooling (for example, the solution picker in the Status bar to open a solution). If you open a folder with lots of files, the search can take a large amount of time and CPU resources during which VS Code might be slow to respond. We plan to improve this in the future but for now you can exclude folders from the explorer via the files.exclude setting and they will not be searched for project files: ``` \"files.exclude\": { \"/largeFolder\": true } ``` Microsoft ended support and is no longer providing security updates for Windows 7, Windows 8, and Windows 8.1. VS Code desktop versions starting with 1.71 (August 2022) no longer run on Windows 7 and starting with 1.80 (June 2023) will no longer run on Windows 8 and 8.1. You will need to upgrade to a newer Windows version to use later versions of VS Code. 
VS Code will no longer provide product updates or security fixes on old Windows versions. VS Code version 1.70.3 is the last available release for Windows 7 users and version 1.79 will be the last available release for Windows 8 and 8.1 users. You can learn more about upgrading your Windows version at support.microsoft.com. Additionally, 32-bit OEM support has been dropped with Windows 10, version 2004. The last stable VS Code version to support Windows 32-bit is 1.83 (September 2023). You will need to update to the 64-bit release. VS Code desktop version starting with 1.83 (September 2023) is deprecating support for macOS Mojave (version 10.14 and older). Starting with VS Code 1.86 (January 2024), we will stop updating VS Code on macOS Mojave (version 10.14 and older). You will need to upgrade to a newer macOS version to use later versions of VS Code. VS Code will no longer provide product updates or security fixes on macOS Mojave (versions 10.14 and older) and VS Code version 1.85 will be the last available release for macOS Mojave (10.14 and older). You can learn more about upgrading your macOS version at support.apple.com. Starting with VS Code release 1.86.1 (January 2024), VS Code desktop is only compatible with Linux distributions based on glibc 2.28 or later, for example, Debian 10, RHEL 8, or Ubuntu 20.04. If you are unable to upgrade your Linux distribution, the recommended alternative is to use our web client. If you would like to use the desktop version, then you can download the VS Code release 1.85 from here. Depending on your platform, make sure to disable updates to stay on that version. A good recommendation is to set up the installation with Portable Mode. You can ask questions and search for answers on Stack Overflow and enter issues and feature requests directly in our GitHub repository. If you'd like to contact a professional support engineer, you can open a ticket with" } ]
{ "category": "App Definition and Development", "file_name": "docs.akuity.io.md", "project_name": "Akuity", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "The Akuity Platform is a fully-managed Kubernetes application delivery platform powered by Argo. To get up and running as quickly as possible, checkout the getting started guide that will walk you through the fundamentals of the Akuity Platform in just minutes and leave you with a fully operational Argo CD instance. Argo is a set of open source tools for Kubernetes to run workflows, manage clusters, and implement the GitOps operational framework. For more info about Argo go to the Argo Project website. To use this documentation and the Akuity Platform effectively, basic familiarity with the following technologies and concepts is suggested: If you have any feedback regarding this documentation or the product, please send us an email. For help please reach out to us through the contact form on our website." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Appveyor", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Use GitHub, Bitbucket button to sign up with your existing developer account (OAuth) or create an AppVeyor account using your email and password. Authorize GitHub or BitBucket to list your repositories. For open-source project developers who are using the same GitHub account for both personal and private company repositories AppVeyor offers a choice between two scopes: public repositories exclusively or public and private. For every project AppVeyor will configure webhooks for its repository to automatically start a build when you push the changes. For every private project AppVeyor will add an SSH public key (deployment key) to the clone repository on the build machine. To kick-off a new build you can either push any changes to your repository or click New build on the project details screen. AppVeyor will provision a new build virtual machine, clone the repository and pass the project through build, test and deploy phases (see Build pipeline). Start from Build configuration to learn how to configure build." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Brigade", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "You are viewing docs for Brigade v2. Click here for v1 docs. This QuickStart presents a comprehensive introduction to Brigade. You will install Brigade with default configuration on a local, development-grade cluster, create a project and an event, watch Brigade handle that event, then clean up. If you prefer learning through video, check out the video adaptation of this guide on our YouTube channel. The default configuration used in this guide is appropriate only for evaluating Brigade on a local, development-grade cluster and is not appropriate for any shared cluster especially a production one. See our Deployment Guide for instructions suitable for shared or production clusters. We have tested these instructions on a local KinD cluster. This section specifically covers installation of Brigades server-side components into a local, development-grade Kubernetes cluster. Well install the Brigade CLI later when were ready to take Brigade for a test drive. Enable Helms experimental OCI support: POSIX ``` $ export HELMEXPERIMENTALOCI=1 ``` PowerShell ``` $env:HELMEXPERIMENTALOCI=1 ``` Run the following commands to install Brigade with default configuration: ``` $ helm install brigade \\ oci://ghcr.io/brigadecore/brigade \\ --version v2.6.0 \\ --create-namespace \\ --namespace brigade \\ --wait \\ --timeout 300s ``` Installation and initial startup may take a few minutes to complete. If the deployment fails, proceed to the troubleshooting section. Since you are running Brigade locally, use port forwarding to make the Brigade API available via the local network interface: POSIX ``` $ kubectl --namespace brigade port-forward service/brigade-apiserver 8443:443 &>/dev/null & ``` PowerShell ``` kubectl --namespace brigade port-forward service/brigade-apiserver 8443:443 *> $null ``` In general, the Brigade CLI, brig, can be installed by downloading the appropriate pre-built binary from our releases page to a directory on your machine that is included in your PATH environment variable. On some systems, it is even easier than this. Below are instructions for common environments: Linux ``` $ curl -Lo /usr/local/bin/brig https://github.com/brigadecore/brigade/releases/download/v2.6.0/brig-linux-amd64 $ chmod +x /usr/local/bin/brig ``` macOS The popular Homebrew package manager provides the most convenient method of installing the Brigade CLI on a Mac: ``` $ brew install brigade-cli ``` Alternatively, you can install manually by directly downloading a pre-built binary: ``` $ curl -Lo /usr/local/bin/brig https://github.com/brigadecore/brigade/releases/download/v2.6.0/brig-darwin-amd64 $ chmod +x /usr/local/bin/brig ``` Windows ``` mkdir -force $env:USERPROFILE\\bin (New-Object Net.WebClient).DownloadFile(\"https://github.com/brigadecore/brigade/releases/download/v2.6.0/brig-windows-amd64.exe\", \"$ENV:USERPROFILE\\bin\\brig.exe\") $env:PATH+=\";$env:USERPROFILE\\bin\" ``` The script above downloads brig.exe and adds it to your PATH for the current" }, { "data": "Add the following line to your PowerShell Profile if you want to make the change permanent: ``` $env:PATH+=\";$env:USERPROFILE\\bin\" ``` In this section, well be logging in as the root user. This option should typically be disabled in a production-grade Brigade deployment. Read more about user authentication here. 
To authenticate to Brigade as the root user, you first need to acquire the auto-generated root user password: POSIX ``` $ export APISERVERROOTPASSWORD=$(kubectl get secret --namespace brigade brigade-apiserver --output jsonpath='{.data.root-user-password}' | base64 --decode) ``` PowerShell ``` $env:APISERVERROOTPASSWORD=$(kubectl get secret --namespace brigade brigade-apiserver --output jsonpath='{.data.root-user-password}' | base64 --decode) ``` Then: POSIX ``` $ brig login --insecure --server https://localhost:8443 --root --password \"${APISERVERROOTPASSWORD}\" ``` PowerShell ``` brig login --insecure --server https://localhost:8443 --root --password \"$env:APISERVERROOTPASSWORD\" ``` The --insecure flag instructs brig login to ignore the self-signed certificate used by our local installation of Brigade. If the brig login command hangs or fails, double-check that port-forwarding for the brigade-apiserver service was successfully completed in the previous section. A Brigade project pairs event subscriptions with worker (event handler) configuration. Rather than create a project definition from scratch, well accelerate the process using the brig init command: ``` $ mkdir first-project $ cd first-project $ brig init --id first-project ``` This will create a project definition similar to the following in .brigade/project.yaml. It subscribes to exec events emitted from a source named brigade.sh/cli. (This type of event is easily created using the CLI, so it is great for demo purposes.) When such an event is received, the embedded script is executed. The script itself branches depending on the source and type of the event received. For an exec event from the source named brigade.sh/cli, this script will spawn and execute a simple Hello World! job. For any other type of event, this script will do nothing. ``` apiVersion: brigade.sh/v2 kind: Project metadata: id: first-project description: My new Brigade project spec: eventSubscriptions: source: brigade.sh/cli types: exec workerTemplate: logLevel: DEBUG defaultConfigFiles: brigade.ts: | import { events, Job } from \"@brigadecore/brigadier\" // Use events.on() to define how your script responds to different events. // The example below depicts handling of \"exec\" events originating from // the Brigade CLI. events.on(\"brigade.sh/cli\", \"exec\", async event => { let job = new Job(\"hello\", \"debian:latest\", event) job.primaryContainer.command = [\"echo\"] job.primaryContainer.arguments = [\"Hello, World!\"] await job.run() }) events.process() ``` The previous command only generated a project definition from a template. We still need to upload this definition to Brigade to complete project creation: ``` $ brig project create --file" }, { "data": "``` To see that Brigade now knows about this project, use brig project list: ``` $ brig project list ID DESCRIPTION AGE first-project My new Brigade project 1m ``` With our project defined, we are now ready to manually create an event and watch Brigade handle it: ``` $ brig event create --project first-project --follow ``` Below is example output: ``` Created event \"2cb85062-f964-454d-ac5c-526cdbdd2679\". Waiting for event's worker to be RUNNING... 
2021-08-10T16:52:01.699Z INFO: brigade-worker version: v2.6.0 2021-08-10T16:52:01.701Z DEBUG: writing default brigade.ts to /var/vcs/.brigade/brigade.ts 2021-08-10T16:52:01.702Z DEBUG: using npm as the package manager 2021-08-10T16:52:01.702Z DEBUG: path /var/vcs/.brigade/node_modules/@brigadecore does not exist; creating it 2021-08-10T16:52:01.702Z DEBUG: polyfilling @brigadecore/brigadier with /var/brigade-worker/brigadier-polyfill 2021-08-10T16:52:01.703Z DEBUG: compiling brigade.ts with flags --target ES6 --module commonjs --esModuleInterop 2021-08-10T16:52:04.210Z DEBUG: running node brigade.js 2021-08-10T16:52:04.360Z [job: hello] INFO: Creating job hello 2021-08-10T16:52:06.921Z [job: hello] DEBUG: Current job phase is SUCCEEDED ``` By default, Brigades scheduler scans for new projects every thirty seconds. If Brigade is slow to handle your first event, this may be why. If you want to keep your Brigade installation, run the following command to remove the example project created in this QuickStart: ``` $ brig project delete --id first-project ``` Otherwise, you can remove all resources created in this QuickStart using: ``` $ helm delete brigade -n brigade ``` You now know how to install Brigade on a local, development-grade cluster, define a project, and manually create an event. Continue on to the Read Next document where we suggest more advanced topics to explore. A common cause for failed Brigade deployments is low disk space on the cluster node. In a local, development-grade cluster on macOS or Windows, this could be because insufficient disk space is allocated to Docker Desktop, or the space allocated is nearly full. If this is the case, it should be evident by examining logs from Brigades MongoDB or ActiveMQ Artemis pods. If the logs include messages such as No space left on device or Disk Full!, then you need to free up disk space and retry the installation. Running docker system prune is one way to recover disk space. After you have freed up disk space, remove the bad installation, and then retry using the following commands: ``` $ helm uninstall brigade -n brigade $ helm install brigade \\ oci://ghcr.io/brigadecore/brigade \\ --version v2.6.0 \\ --namespace brigade \\ --wait \\ --timeout 300s ``` If the brig login command hangs, check that you included the --insecure (or -k) flag. This flag is required because the default configuration utilized by this QuickStart makes use of a self-signed certificate. 2017 - 2022 The Brigade Authors Powered by Hugo. Theme by TechDoc. Designed by Thingsym." } ]
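To look for the disk-space symptoms described above, you can inspect the logs of the MongoDB and ActiveMQ Artemis pods. The pod names below are placeholders, since the actual names depend on the Helm release; list the pods first to find them.

```
# List the Brigade pods to find the MongoDB and Artemis pod names.
kubectl get pods --namespace brigade

# Search their logs for the messages mentioned above (substitute real pod names).
kubectl logs --namespace brigade <mongodb-pod-name> | grep -i "no space left on device"
kubectl logs --namespace brigade <artemis-pod-name> | grep -i "disk full"

# If the node really is out of space, reclaiming Docker disk usage is one option.
docker system prune
```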
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Buildkite", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Explore our guides, examples, and reference documentation to learn Buildkite. Powerful CI/CD built to scale on your infrastructure Real-time tracking and monitoring for your tests How to define steps in YAML and create dynamic pipelines An introduction to the Buildkite agent Best practices for using secrets in pipelines How to get the most from artifacts in pipelines A collection of examples that demonstrate pipeline features A list of common terms that describe key concepts" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Cartographer", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Cartographer is a Supply Chain Choreographer for Kubernetes. It enables App Operators to create supply chains, pre-approved paths that standardize how multiple app teams deliver applications to end users. Cartographer enables this within the Kubernetes ecosystem, allowing supply chains to be composed of resources from an organizations existing toolchains (e.g. Jenkins). Each pre-approved supply chain creates a paved road to production, orchestrating test, build, scan, deploy. Developers are freed to focus on delivering value to their users while App Operators retain the peace of mind that all code in production has passed through every step of an approved workflow. Cartographer allows users to define every step that an application must go through to reach production. Users achieve this with the Supply Chain abstraction, see Spec Reference. The supply chain consists of resources that are specified via Templates. A template acts as a wrapper for a Kubernetes resource, allowing Cartographer to integrate each well known tool into a cohesive whole. There are four types of templates: Contrary to many other Kubernetes native workflow tools that already exist in the market, Cartographer does not run any of the objects themselves. Instead, it leverages the controller pattern at the heart of Kubernetes. Cartographer creates an object on the cluster and the controller responsible for that resource type carries out its control loop. Cartographer monitors the outcome of this work and captures the outputs. Cartographer then applies these outputs in the following templates in the supply chain. In this manner, a declarative chain of Kubernetes resources is created. The simplest explanation of Kubernetes' control loops is that an object is created with a desired state and a controller moves the cluster closer to the desired state. For most Kubernetes objects, this pattern this includes the ability of an actor to update the desired state (to update the spec of an object), and have the controller move the cluster toward the new desired state. But not all Kubernetes resources are updatable; this class of immutable resources includes resources of the CI/CD tool Tekton. Cartographer enables coordination of these resources with its immutable pattern: rather than updating an object and monitoring for its new outputs, Cartographer creates a new immutable object and reads the outputs of the new object. While the supply chain is operator facing, Cartographer also provides an abstraction for developers called workloads. Workloads allow developers to create application specifications such as the location of their repository, environment variables and service claims. By design, supply chains can be reused by many workloads. This allows an operator to specify the steps in the path to production a single time, and for developers to specify their applications independently but for each to use the same path to production. The intent is that developers are able to focus on providing value for their users and can reach production quickly and easily, while providing peace of mind for app operators, who are ensured that each application has passed through the steps of the path to production that theyve defined. 2024 Cartographer Authors. A VMware-backed project. This Website Does Not Use Cookies or Other Tracking Technology" } ]
{ "category": "App Definition and Development", "file_name": "v0.7.0.md", "project_name": "Cartographer", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Cartographer is a Supply Chain Choreographer for Kubernetes. It enables App Operators to create supply chains, pre-approved paths that standardize how multiple app teams deliver applications to end users. Cartographer enables this within the Kubernetes ecosystem, allowing supply chains to be composed of resources from an organizations existing toolchains (e.g. Jenkins). Each pre-approved supply chain creates a paved road to production, orchestrating test, build, scan, deploy. Developers are freed to focus on delivering value to their users while App Operators retain the peace of mind that all code in production has passed through every step of an approved workflow. Cartographer allows users to define every step that an application must go through to reach production. Users achieve this with the Supply Chain abstraction, see Spec Reference. The supply chain consists of resources that are specified via Templates. A template acts as a wrapper for a Kubernetes resource, allowing Cartographer to integrate each well known tool into a cohesive whole. There are four types of templates: Contrary to many other Kubernetes native workflow tools that already exist in the market, Cartographer does not run any of the objects themselves. Instead, it leverages the controller pattern at the heart of Kubernetes. Cartographer creates an object on the cluster and the controller responsible for that resource type carries out its control loop. Cartographer monitors the outcome of this work and captures the outputs. Cartographer then applies these outputs in the following templates in the supply chain. In this manner, a declarative chain of Kubernetes resources is created. The simplest explanation of Kubernetes' control loops is that an object is created with a desired state and a controller moves the cluster closer to the desired state. For most Kubernetes objects, this pattern this includes the ability of an actor to update the desired state (to update the spec of an object), and have the controller move the cluster toward the new desired state. But not all Kubernetes resources are updatable; this class of immutable resources includes resources of the CI/CD tool Tekton. Cartographer enables coordination of these resources with its immutable pattern: rather than updating an object and monitoring for its new outputs, Cartographer creates a new immutable object and reads the outputs of the new object. While the supply chain is operator facing, Cartographer also provides an abstraction for developers called workloads. Workloads allow developers to create application specifications such as the location of their repository, environment variables and service claims. By design, supply chains can be reused by many workloads. This allows an operator to specify the steps in the path to production a single time, and for developers to specify their applications independently but for each to use the same path to production. The intent is that developers are able to focus on providing value for their users and can reach production quickly and easily, while providing peace of mind for app operators, who are ensured that each application has passed through the steps of the path to production that theyve defined. 2024 Cartographer Authors. A VMware-backed project. This Website Does Not Use Cookies or Other Tracking Technology" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "CircleCI", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This document provides the basic concepts that a longtime Jenkins user needs to know when migrating from Jenkins to CircleCI. CircleCI is a very different product from Jenkins, with a lot of different concepts on how to manage CI/CD, but it will not take long to migrate the basic functionality of your Jenkins build to CircleCI. To get started quickly, try these steps: Getting Started: Run your first green build on CircleCI using the guide. Copy-paste your commands from Execute Shell: To simply duplicate your project exactly as it is in Jenkins, add a file called config.yml to a .circleci/ directory of your project with the following content: ``` steps: run: \"Add any bash command you want here\" run: command: | echo \"Arbitrary multi-line bash\" echo \"Copy-paste from 'Execute Shell' in Jenkins\" ``` Some programs and utilities are pre-installed on CircleCI Images, but anything else required by your build must be installed with a run step. Your projects dependencies may be cached for the next build using the savecache and restorecache steps, so that they only need to be fully downloaded and installed once. Manual configuration: If you were using plugins or options other than Execute Shell in Jenkins to run your build steps, you may need to manually port your build from Jenkins. Use the Configuring CircleCI document as a guide to the complete set of CircleCI configuration keys. Jenkins projects are generally configured in the Jenkins web UI, and the settings are stored on the filesystem of the Jenkins server. This makes it difficult to share configuration information within a team or organization. Cloning a repository from your VCS does not copy the information stored in Jenkins. Settings stored on the Jenkins server also make regular backups of all Jenkins servers required. Almost all configuration for CircleCI builds are stored in a file called .circleci/config.yml that is located in the root directory of each project. Treating CI/CD configuration like any other source code makes it easier to back up and share. Just a few project settings, like secrets, that should not be stored in source code are stored (encrypted) in the CircleCI app. It is often the responsibility of an ops person or team to manage Jenkins servers. These people generally get involved with various CI/CD maintenance tasks like installing dependencies and troubleshooting issues. It is never necessary to access a CircleCI environment to install dependencies, because every build starts in a fresh environment where custom dependencies must be installed automatically, ensuring that the entire build process is truly automated. Troubleshooting in the execution environment can be done easily and securely by any developer using CircleCIs SSH feature. If you install CircleCI on your own hardware, the divide between the host OS (at the metal/VM level) and the containerized execution environments can be extremely useful for security and ops (see Your builds in containers below). Ops team members can do what they need to on the host OS without affecting builds, and they never need to give developers access. Developers, on the other hand, can use CircleCIs SSH feature to debug builds at the container level as much as they like without affecting ops. You have to use plugins to do almost anything with Jenkins, including checking out a Git repository. 
Like Jenkins itself, its plugins are Java-based, and a bit" }, { "data": "They interface with any of several hundred possible extension points in Jenkins and can generate web views using JSP-style tags and views. All core CI functionality is built into CircleCI. Features such as checking out source from a VCS, running builds and tests with your favorite tools, parsing test output, and storing artifacts are plugin-free. When you do need to add custom functionality to your builds and deployments, you can do so with a couple snippets of bash in appropriate places. It is possible to make a Jenkins server distribute your builds to a number of agent machines to execute the jobs, but this takes a fair amount of work to set up. According to Jenkins docs, Jenkins is not a clustering middleware, and therefore it doesnt make this any easier. CircleCI distributes builds to a large fleet of builder machines by default. If you use CircleCI cloud, then this just happens for you - your builds do not queue unless you are using all the build capacity in your plan. If you use CircleCI server, then you will appreciate that CircleCI does manage your cluster of builder machines without the need for any extra tools. Talking about containerization in build systems can be complicated, because arbitrary build and test commands can be run inside of containers as part of the implementation of the CI/CD system, and some of these commands may involve running containers. Both of these points are addressed below. Also note that Docker is an extremely popular tool for running containers, but it is not the only one. Both the terms container (general) and Docker (specific) will be used. If you use a tool like Docker in your workflow, you will likely also want to run it on CI/CD. Jenkins does not provide any built-in support for this, and it is up to you to make sure it is installed and available within your execution environment. Docker has long been one of the tools that is pre-installed on CircleCI, so you can access Docker in your builds by adding docker as an executor in your .circleci/config.yml file. See the Introduction to Execution Environments page for more info. Jenkins normally runs your build in an ordinary directory on the build server, which can cause lots of issues with dependencies, files, and other state gathering on the server over time. There are plugins that offer alternatives, but they must be manually installed. CircleCI runs all Linux and Android builds in dedicated containers, which are destroyed immediately after use (macOS builds run in single-use VMs). This creates a fresh environment for every build, preventing unwanted cruft from getting into builds. One-off environments also promote a disposable mindset that ensures all dependencies are documented in code and prevents snowflake build servers. If you run builds on your own hardware with CircleCI, running all builds in containers allows you to heavily utilize the hardware available to run builds. It is possible to run multiple tests in parallel on a Jenkins build using techniques like multithreading, but this can cause subtle issues related to shared resources like databases and filesystems. CircleCI lets you increase the parallelism in any projects settings so that each build for that project uses multiple containers at once. Tests are evenly split between containers allowing the total build to run in a fraction of the time it normally would. 
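Pulling these pieces together, the sketch below writes a .circleci/config.yml that uses a Docker executor, the save_cache and restore_cache steps mentioned earlier, and a parallelism setting of 4. The image, cache key, and npm commands are illustrative assumptions for a Node.js project rather than a definitive configuration.

```
# Illustrative .circleci/config.yml; adjust the image, cache key, and commands
# for your own stack.
mkdir -p .circleci
cat > .circleci/config.yml <<'EOF'
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/node:lts        # docker executor, as described above
    parallelism: 4                  # run this job across 4 containers at once
    steps:
      - checkout
      - restore_cache:
          keys:
            - deps-{{ checksum "package-lock.json" }}
      - run: npm ci
      - save_cache:
          key: deps-{{ checksum "package-lock.json" }}
          paths:
            - ~/.npm
      - run: npm test
workflows:
  main:
    jobs:
      - build
EOF
```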
Unlike with simple multithreading, tests are strongly isolated from each other in their own environments. You can read more about parallelism on CircleCI in the Running Tests in Parallel document." } ]
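Within a job that has parallelism set, each container typically selects its share of the test files using the CircleCI CLI's test-splitting commands available in the build environment; the glob pattern and pytest invocation below are assumptions for illustration only.

```
# Run as a step inside a parallel job: each container receives a different
# subset of the matched test files (split here by historical timing data).
TEST_FILES=$(circleci tests glob "tests/**/*_test.py" | circleci tests split --split-by=timings)
python -m pytest $TEST_FILES   # word-splitting of $TEST_FILES is intended
```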
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Cloudbees Codeship", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Use our knowledge hive to excel at software delivery. By monitoring plugin versions and comparing the configuration of your instance to plugins identified by CloudBees as verified, trusted, or unverified, Beekeeper provides \"envelope enforcement\" to provide stability and security to CloudBees CI, CloudBees Jenkins Distribution, and CloudBees Jenkins Platform. Jenkins Pipelines is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. Understand how to define and administer pipelines for CloudBees CI, CloudBees Jenkins Distribution, and CloudBees Jenkins Platform. Learn about installation prerequisites for CloudBees Build Acceleration, how to install, and how to run your first build. CloudBees Software Delivery Automation enables enterprises to optimize their software delivery process for increased innovation and security by connecting, automating, and orchestrating the tools and functions across development, operations, and shared service teams. CloudBees CI is an end-to-end continuous software delivery system. It offers stable releases with monthly updates, as well as additional proprietary tools and enterprise features to enhance the manageability and security of Jenkins. CloudBees CD/RO is an Adaptive Release Orchestration platform that eliminates scripts and automates deployment pipelines while verifying performance, quality and security. CloudBees Previews lets you deploy project artifacts from pull requests. It then creates an isolated preview environment for the artifacts that lets you or your collaborators inspect the changes you introduced. The preview environments are short-lived, ephemeral environments that exist for the life of a given pull request. The CloudBees platform is a DevOps integration solution that enables organizations, regardless of where they are in their DevOps journey, to advance their DevOps practice. The platform can leverage most existing processes and CI/CD tool chains. With CloudBees Feature Management, accelerate your development speed and reduce risk by decoupling feature deployment from code releases. Control every change in your app in real-time while measuring the business and technical impact. CloudBees Release Orchestration SaaS is a DevOps integration solution that enables organizations, regardless of where they are in their DevOps journey, to advance their DevOps practice. CloudBees Release Orchestration SaaS can leverage most existing processes and CI/CD tool chains. CloudBees Build Acceleration (CloudBees Accelerator) reduces cycle time and iterates faster with fault-tolerant workload distribution. Smart load balancing ensures optimal use of cores across the cluster. Dynamic resource provisioning provides instantaneous scale up/down. Ship faster with CI/CD as a Service. No more waiting for builds to start when speed and reliability are critical. Flexible, simple CI/CD for small and growing teams. Need more information? Access additional resources and product help from CloudBees support" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Codefresh", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "By executing any applicable Order that references these Terms of Service (collectively the Agreement), the Parties to the Agreement acknowledge and agree that these binding standard terms and conditions (the Terms) shall apply: The following Terms dictate the Agreement between Codefresh, Inc. a Delaware corporation, or the Codefresh entity set forth in the applicable Order if different, and its respective affiliates, (collectively, Codefresh) and the licensee identified in the Agreement (Licensee or You). Your right to access and use the Service, whether or not an Agreement has been executed between Codefresh and Licensee (or an entity that Licensee represents), is expressly conditioned on acceptance of these Terms. By accessing or using the Services provided by Codefresh, Licensee agrees to be bound by and abide by these Terms. These Terms shall apply to all use by Licensee and Users of the Service. 1.1.Definitions. Capitalized terms not defined herein shall be given the meaning set forth in the applicable Order. (i)Account means a user account created to access the Codefresh platform. (ii)Codefresh Content means data, Documentation, reports, text, images, sounds, video, and content made available through any of the Service. (iii)Documentation means the user documentation that Codefresh makes generally available to users at https://codefresh.io/docs/. (iv)Licensee means an individual, entity, or other legal person using the Service. (v)Licensee Content means all data, software, information, text, audio files, graphic files, content, and other materials that you upload, post, deliver, provide, or otherwise transmit or store in connection with or relating to the Service submitted by or for Licensee to the Service or collected and processed by or for Licensee using the Service, excluding Codefresh Content and Non-Codefresh Applications. (vi)Malicious Code means code, files, scripts, agents, or programs intended to do harm, including, for example, viruses, worms, time bombs and Trojan horses. (vii)Non-Codefresh Application(s) means a web-based or offline software application that is provided by Licensee or a third party and interoperates with the Service. (viii)Order means a Service order form, other ordering document, web-based, or email-based ordering mechanism or registration process for the Service. (ix)Service means the Site, including related services provided through the site, or the Software. (x)Site means Codefreshs website, located at https://support.codefresh.io. (xi)SLA means the service level agreement in effect as of the Orders Effective Date, which can be found at https://codefresh.io/docs/docs/terms-and-privacy-policy/privacy-policy/. (xii)Software means any software developed and made available to Licensee as set forth in an applicable Order, which may include Codefreshs build, test, and deployment docker container software tools, services, and related technologies. (xiii)User means an individual who is authorized by Licensee to use the Service, for whom Licensee (or Codefresh at Licensees request) has supplied a user identification and password either manually or using a Non-Codefresh Application. Users may include Licensees employees, consultants, contractors, agents, and third parties that Licensee transacts business with. 1.2.Codefresh may provide its Services to you through the Site or on-premises as set forth in an applicable Order. 
By entering into an Order or otherwise downloading, accessing, or using the Service, Licensee unconditionally accepts and agrees to, and represents that it has the authority to bind itself and its affiliates to, all of these Terms. 2.1.Scope of Service. For all cloud-based Services provided via Codefreshs remote platform hereunder (such platform, the Codefresh Cloud), the Scope shall mean both the authorized number of Users and number of Cloud Credits (as defined below) set forth in the applicable Order. 2.2.Cloud Credits. Licensee may purchase Cloud Credits, which allow Users to use the Service on Codefresh Cloud in a specific capacity (each such unit, a Cloud" }, { "data": "Cloud Credits are available during the Term and expire upon termination. Cloud Credits are not redeemable for cash and are not refundable as cash under any circumstances. Cloud Credits are not transferable and may only be applied to Licensees account. Cloud Credits usage will be calculated based on the infrastructure size Licensee uses, as set out in the applicable Order, and as detailed in the below chart: | Machine Size | CPU | Memory | Credit/minute | |:|:|:|-:| | S | 1 | 1 GB | 5 | | M | 2 | 4 GB | 10 | | L | 4 | 8 GB | 20 | | XL | 8 | 16 GB | 40 | | XXL | 16 | 32 GB | 80 | 3.1.Scope of Services. For all Services provided to Licensee on-premises by Codefresh (the On-Premises Services), the Scope shall mean the authorized number of Users as set forth in the applicable Order. 3.2.Equipment Maintenance. Licensee shall be responsible for obtaining and maintaining any equipment and ancillary services or tools needed to connect to, access or otherwise use the Service, including, without limitation, modems, hardware, server, software, operating system, networking, web servers, long distance, and local telephone service (collectively, Equipment). Licensee shall be responsible for ensuring that such Equipment is compatible with the Service (and, to the extent applicable, the Software) and complies with all configurations and specifications set forth in the Documentation. 4.1.License. Subject to these Terms and payment of all fees described in an Order, during the Term Codefresh grants Licensee and each User a limited, non-sublicensable, non-exclusive, non-transferable license to use the object code of any Software and Codefresh Content solely in connection with the Service and any terms and procedures Codefresh may prescribe from time to time. 4.2.Restrictions. Subject to these Terms, Licensee and Users may access and use the Service and Codefresh Content only for lawful purposes. All rights, title, and interest in and to the Service and its components, Codefresh Content and all related intellectual property rights will remain with and belong exclusively to Codefresh. Licensee shall maintain the copyright notice and any other notices that appear on the Service on any copies and any media. 
Neither Licensee nor any User shall directly or indirectly (nor shall they allow any third party to) (i) modify, reverse engineer, or attempt to hack or otherwise discover any source code or underlying code, ideas, or algorithms of the Service (except to the extent that applicable law prohibits reverse engineering restrictions), (ii) sell, resell, license, sublicense, provide, lease, lend, use for timesharing, or service bureau purposes or otherwise use or allow others to use the Service or Codefresh Content for the benefit of any third party, (iii) use the Service or Codefresh Content, or allow the transfer, transmission, export, or re-export of the Service or Content or portion thereof, in violation of any export control laws or regulations administered by the" }, { "data": "Commerce Department, OFAC, or any other government agency, (iv) use the Service to store or transmit infringing, libelous, or otherwise unlawful or tortious material, or to store or transmit material in violation of third-party privacy or intellectual property rights, (v) use the Service to store or transmit Malicious Code, (vi) interfere with or disrupt the integrity or performance of the Service or its components, (vii) attempt to gain unauthorized access to the Service or its related systems or networks, (viii) permit direct or indirect access to or use of any Service or Codefresh Content in a way that circumvents a contractual usage limit, (ix) copy the Service or any part, feature, function or user interface thereof, access the Service in order to build a competitive product or service, or (x) use the Service for any purpose other than as expressly licensed herein. 4.3.Licensee Service Obligations. Any User of the Service must be thirteen (13) years old or older to use the Service. Licensee shall (i) ensure and be responsible for Users compliance with these Terms, (ii) be responsible for the quality and legality of Licensee Content and the means by which Licensee acquired Licensee Content, (iii) use commercially reasonable efforts to prevent unauthorized access to or use of the Service, and notify Codefresh promptly of any such unauthorized access or use, (iv) use the Service only in accordance with the Codefreshs Service documentation and applicable laws and government regulations, and (v) comply with terms of service of Non-Codefresh Applications with which Licensee uses the Service. Licensee and Users are responsible for maintaining the security of Users accounts and passwords. Codefresh cannot and shall not be liable for any loss or damage from Licensees or any Users failure to comply with this security obligation. Licensee and Users may not access the Service, if they are Codefreshs direct competitor, except with Codefreshs prior written consent. In addition, Licensee and Users may not access the Service for purposes of monitoring its availability, performance, or functionality, or for any other benchmarking or competitive purposes. 4.4.Enforcement. Licensee shall promptly notify Codefresh of any suspected or alleged violation of these Terms and shall cooperate with Codefresh with respect to: (i) investigation by Codefresh of any suspected or alleged violation of these Terms and (ii) any action by Codefresh to enforce these Terms. Codefresh may, in its sole discretion, suspend or terminate any Users access to the Service with or without written notice to Licensee in the event that Codefresh reasonably determines that a User has violated these Terms. 
Licensee shall be liable for any violation of these Terms by any User. 4.5.Excess Use. Should Licensee use the Service beyond the applicable Scope (Excess Use), Codefresh shall invoice Licensee for the Excess Use at Codefreshs current pricing plans, such that Licensee is billed in accordance with the actual usage of the Service. To verify any Excess Use, and to extent Licensee uses On-Premises Services, Licensee will maintain, and Codefresh will be entitled to audit, any records relevant to Licensee's use of the Service hereunder. Codefresh may audit such records on reasonable notice at Codefresh's cost (or if the audits reveal material non-compliance with these Terms, at Licensee's cost), including without limitation, to confirm number of Users and/or Excess Use. From time to time, Licensee may be invited to try certain products at no charge for a free trial or evaluation period or if such products are not generally available to licensees (collectively, Trial License). Trial Licenses will be designated or identified as beta, pilot, evaluation, trial, or similar. Notwithstanding anything to the contrary herein, Trial Licenses are licensed for Licensees internal evaluation purposes only (and not for production use), are provided as is without warranty or indemnity of any kind and may be subject to additional terms. Unless otherwise stated, any Trial Licenses shall expire thirty (30) days from the trial start date. Notwithstanding the foregoing, Codefresh may discontinue Trial Licenses at any time at its sole discretion and may never make any Trial Licenses generally available. Codefresh will have no liability for any harm or damage arising out of or in connection with any Trial Licenses. 6.1.Account" }, { "data": "As part of the registration process, each User shall generate a username and password for its Account either manually or through a Non-Codefresh Application. Each User is responsible for maintaining the confidentiality of their login, password, and Account and for all activities that occur under any such logins or the Account. Codefresh reserves the right to access Licensees and any Users Account in order to respond to Licensees and Users requests for technical support. Codefresh has the right, but not the obligation, to monitor the Service, Codefresh Content, or Licensee Content, to the extent CodeFresh has access. Licensee further agrees that Codefresh may remove or disable any Codefresh Content at any time for any reason (including, but not limited to, upon receipt of claims or allegations from third parties or authorities relating to such Codefresh Content), or for no reason at all. 6.2. Accessing the Service. Licensee and its Users may enable or log in to the Service via certain Non-Codefresh Applications, such as GitHub. By logging into or directly integrating these Non-Codefresh Applications into the Service, Codefresh Users may have access to additional features and capabilities. To take advantage of such features and capabilities, Codefresh may ask Users to authenticate, register for, or log into Non-Codefresh Applications on the websites of their respective providers. As part of such integration, the Non-Codefresh Applications will provide Codefresh with access to certain information that Users have provided to such Non-Codefresh Applications, and Codefresh will use, store, and disclose such information in accordance with Codefreshs Privacy Policy located at https://codefresh.com/privacy/. 
The manner in which Non-Codefresh Applications use, store, and disclose Licensee and User information is governed solely by the policies of the third parties operating the Non-Codefresh Applications, and Codefresh shall have no liability or responsibility for the privacy practices or other actions of any third-party site or service that may be enabled within the Service. In addition, Codefresh is not responsible for the accuracy, availability, or reliability of any information, content, goods, data, opinions, advice, or statements made available in connection with Non-Codefresh Applications. As such, Codefresh shall not be liable for any damage or loss caused or alleged to be caused by or in connection with use of or reliance on any such Non-Codefresh Applications. Codefresh enables these features merely as a convenience and the integration or inclusion of such features does not imply an endorsement or recommendation. 6.3.Support. Codefresh will provide Licensee with maintenance and support services in accordance with and subject to the SLA. Community-based support is also available via Codefreshs Discuss site located at https://discuss.codefresh.com (or successor URL) for the Service at no additional charge. Upgraded support is available if purchased pursuant to an Order. 7.1.Licensee shall pay Codefresh the fees set forth in an Order in accordance with the terms set forth therein; provided that Codefresh may change any applicable fees upon thirty (30) days notice at any time and such new fees shall become effective for any subsequent renewal Term. All payments shall be made in U.S. dollars. Any payments more than thirty (30) days overdue will bear a late payment fee of one and one-half percent (1.5%) per month or the maximum rate allowed by law, whichever is lower. In addition, Licensee will pay all taxes, shipping, duties, withholdings, and similar expenses, as well as all pre-approved out of pocket expenses incurred by Codefresh in connection with any consulting and/or support services, promptly upon invoice. If Licensee is paying any fees by credit card, Licensee shall provide Codefresh complete and accurate information regarding the applicable credit" }, { "data": "Licensee represents and warrants that all such information is correct and that Licensee is authorized to use such credit card. Licensee will promptly update its account information with any changes (for example, a change in billing address or credit card expiration date) that may occur. Licensee hereby authorizes Codefresh to bill such credit card in advance on a periodic basis in accordance with these Terms and the applicable Order, and Licensee further agrees to pay any charges so incurred. 7.2.For any upgrade in a subscription level for a month-to-month service plan, Codefresh shall automatically charge Licensee the new subscription fee, effective as of the date the service upgrade is requested and for each subsequent one-month recurring cycle pursuant to the billing method applicable to Licensee. If Codefresh is providing Licensee the Service pursuant to a yearly service plan, Codefresh will immediately charge Licensee any increase in subscription level plan cost pursuant to the billing method applicable to Licensee, prorated for the remaining Term of Licensees yearly billing cycle; provided, however, any decrease in a subscription level plan cost shall only take effect upon the renewal date of the then current yearly service plan. 
Licensees downgrading its subscription level may cause the loss of features or capacity of Licensees Account. Codefresh does not accept any liability for such loss. 7.3.If any amount owing by Licensee under these Terms for the Service is thirty (30) or more days overdue (or ten (10) or more days overdue in the case of amounts Licensee has authorized Codefresh to charge to Licensees credit card), Codefresh may, in its sole discretion and without limiting its other rights and remedies, suspend Licensees and any Users access to the Service and/or otherwise limit the functionality of the Service until such amounts are paid in full. 7.4.Licensee agrees that its purchases are not contingent on the delivery of any future functionality or features, or dependent on any oral or written public comments made by Codefresh regarding future functionality or features. 8.1.These Terms shall continue in effect for the initial term and any renewal term as specified in an Order (collectively, the Term). If either party materially breaches these Terms, the other party shall have the right to terminate the applicable Order and, in the event that no Order exists, these Terms (and, in each case, all licenses granted herein) upon thirty (30) days (ten (10) days in the case of non-payment and immediately in the case of a material breach) written notice of any such breach, unless such breach is cured during such notice period. In the case of a free trial or Codefresh otherwise providing the Service at no cost to a Licensee, Codefresh shall have, upon Licensee or any Users failing to use the Service for more than six (6) consecutive months, the right, in its sole discretion, to terminate all User Accounts of Licensee and terminate Licensees and all Licensees Users access to and use of the Service without notice. Upon expiration or termination of an Order or these Terms, Licensee shall immediately be unable access and use the Service, all Licensee Content may be deleted from the Service at Codefreshs sole discretion (such information cannot be recovered once Licensees Account or any User Account is terminated) and Licensee shall return or destroy all copies of all Codefresh Content and all portions thereof in Licensees possession and so certify to Codefresh, if such certification is requested by" }, { "data": "Any provision of these Terms that, by its nature and context is intended to survive, including provisions relation to payment of outstanding fees, confidentiality, warranties, and limitation of liability, will survive termination of these Terms. 8.2.Codefresh will promptly terminate without notice the accounts of Users that are determined by Codefresh to be repeat infringer(s). A repeat infringer is a User who has been notified of infringing activity more than twice and/or has had Licensee Content or Non-Codefresh Applications removed from the Service more than twice. Codefresh represents and warrants that the Service will function in substantial compliance with the applicable Documentation. In order to be entitled to any remedy based on a purported breach of the foregoing representation and warranty, Licensee must inform Codefresh of the purported deficiency in the Service within thirty (30) days of the day on which Licensee first becomes aware of the condition giving rise to such claim. EXCEPT AS EXPRESSLY SET FORTH HEREIN, THE SERVICE, SITE, CODEFRESH CONTENT, AND ALL SERVER AND NETWORK COMPONENTS ARE PROVIDED ON AN AS IS AND AS AVAILABLE BASIS. 
CODEFRESH EXPRESSLY DISCLAIMS ANY AND ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY, TITLE, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT. LICENSEE AND USERS ACKNOWLEDGE THAT CODEFRESH DOES NOT WARRANT THAT THE SERVICE WILL BE UNINTERRUPTED, TIMELY, SECURE, ERROR-FREE OR VIRUS-FREE, NOR DOES CODEFRESH MAKE ANY WARRANTY AS TO THE RESULTS THAT MAY BE OBTAINED FROM USE OF THE SERVICE, AND NO INFORMATION, ADVICE, OR SERVICES OBTAINED BY LICENSEE OR USERS FROM CODEFRESH OR THROUGH THE SERVICE SHALL CREATE ANY WARRANTY NOT EXPRESSLY STATED IN THESE TERMS. THE SERVICE MAY BE TEMPORARILY UNAVAILABLE FOR SCHEDULED MAINTENANCE OR FOR UNSCHEDULED EMERGENCY MAINTENANCE, EITHER BY CODEFRESH OR BY THIRD-PARTY PROVIDERS, OR BECAUSE OF CAUSES BEYOND CODEFRESHS REASONABLE CONTROL. CODEFRESH SHALL USE REASONABLE EFFORTS TO PROVIDE ADVANCE NOTICE OF ANY SCHEDULED SERVICE DISRUPTION. 10.1.Licensee will indemnify, defend and hold harmless Codefresh and its officers, directors, employee and agents, from and against any third-party claims, disputes, demands, liabilities, damages, losses, and costs and expenses, including, without limitation, reasonable legal and professional fees, arising out of or in any way connected with (i) Licensees or Users access to or use of the Service that is in violation of law or this Agreement, or (ii) the Licensee Content as provided to Codefresh that is in violation of law or this Agreement, provided that Codefresh: (a) promptly notifies Licensee in writing of the claim; (b) grants Licensee sole control of the defense and settlement of the claim; and (c) provides Licensee, at Licensees expense, with all assistance, information and authority reasonably required for the defense and settlement of the claim. 10.2Codefresh will indemnify, defend and hold harmless Licensee and its officers, directors, employee and agents, from and against any claims, disputes, demands, liabilities, damages, losses, and costs and expenses, including, without limitation, reasonable legal and professional fees, to the extent that it is based upon a third-party claim that the Service, as provided by under this Agreement and used within the scope of this Agreement, infringes or misappropriates any intellectual property right in any jurisdiction, and will pay any costs, damages and reasonable attorneys fees attributable to such claim that are awarded against Licensee, provided that Licensee: (i) promptly notifies Codefresh in writing of the claim; (ii) grants Codefresh sole control of the defense and settlement of the claim; and (iii) provides Codefresh, at Codefreshs expense, with all assistance, information and authority reasonably required for the" }, { "data": "use of any of the Codefresh Content and/or the Service is, or in Codefreshs reasonable opinion is likely to be, the subject of a claim specified in this Section, then Codefresh may, at its sole option and expense: (a) procure for Licensee the right to continue using the Codefresh Content and/or the Service; (b) replace or modify the Codefresh Content and/or the Service so that it is non-infringing while maintaining substantially equivalent in function to the original Codefresh Content and/or the Service; or (c) if options (a) and (b) above cannot be accomplished despite Codefreshs reasonable efforts, then Codefresh or Licensee may terminate this Agreement and Codefresh will provide pro rata refund of unused/unapplied fees paid in advance for any applicable subscription term. 
THE PROVISIONS OF THIS SECTION 10.2 SET FORTH CODEFRESHS SOLE AND EXCLUSIVE OBLIGATIONS, AND LICENSEES SOLE AND EXCLUSIVE REMEDIES, WITH RESPECT TO INDEMNIFICATION OBLIGATIONS FOR INFRINGEMENT OR MISAPPROPRIATION OF INTELLECTUAL PROPERTY RIGHTS OF ANY KIND. EXCEPT FOR A LIABILITY ARISING FROM SECTION 4.2 OR A PARTYS INDEMNITY OBLIGATIONS SET FORTH IN SECTION 10, EACH PARTYS LIABILITY UNDER THIS AGREEMENT SHALL BE LIMITED TO THE FEES PAID OR PAYABLE BY LICENSEE TO CODEFRESH IN THE TWELVE (12) MONTHS PRECEDING THE EVENT GIVING RISE TO THE LIABILITY. THE PROVISIONS OF THIS SECTION SHALL APPLY WHETHER OR NOT THE LICENSEE HAS BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGE, AND EVEN IF AN EXCLUSIVE REMEDY SET FORTH HEREIN IS FOUND TO HAVE FAILED OF ITS ESSENTIAL PURPOSE. UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY (WHETHER IN CONTRACT, TORT OR OTHERWISE) SHALL EITHER PARTY BE LIABLE TO THE OTHER PARTY, ANY USER, OR ANY THIRD-PARTY FOR ANY INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, CONSEQUENTIAL OR PUNITIVE DAMAGES, INCLUDING LOST PROFITS, LOST SALES OR BUSINESS, LOST DATA, GOODWILL, OR OTHER INTANGIBLE LOSSES. THE PARTIES HAVE RELIED ON THESE LIMITATIONS IN DETERMINING WHETHER TO ENTER INTO THESE TERMS. IF APPLICABLE LAW DOES NOT ALLOW FOR CERTAIN LIMITATIONS OF LIABILITY, SUCH AS THE EXCLUSION OF IMPLIED WARRANTEES OR LIMITATION OF LIABILITY FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES, THE PROVISIONS OF THIS SECTION 11 SHALL APPLY TO THE MAXIMUM EXTENT ALLOWABLE UNDER SUCH APPLICABLE LAW. 12.1.Intellectual Property Rights. Subject to the limited licenses expressly granted hereunder, Codefresh and its licensors reserve their respective right, title, and interest in and to the Service, including all of Codefreshs and its licensors related intellectual property rights related to the Service (the Intellectual Property Rights). No rights are granted to Licensee hereunder other than as expressly set forth herein. Codefresh shall retain all right, title, and interest in and to Intellectual Property Rights, including any Software, all improvements, enhancements, modifications, and derivative works thereof, and all related Intellectual Property Rights. 12.2. License to Host Licensee Content. Licensee hereby grants Codefresh a worldwide, non-exclusive, royalty-free, fully paid, sublicensable, limited-term license to host, copy, transmit and display Licensee Content that Licensee or any User posts to the Service, solely as necessary for Codefresh to provide the Service in accordance with these Terms. Subject to the limited licenses granted herein, Codefresh acquires no right, title or interest from Licensee or Licensees licensors under these Terms in or to Licensee Content. 12.3.License to Use Feedback. 
Licensee hereby grants to Codefresh a worldwide, perpetual, irrevocable, royalty-free license to use and incorporate into the Service any suggestion, enhancement request, recommendation, correction, or other feedback provided by Licensee or Users relating to the operation of the" }, { "data": "Any technical, financial, business or other information provided by one party (the Disclosing Party) to the other party (the Receiving Party) and designated as confidential or proprietary or that reasonably should be understood to be confidential given the nature of the information and the circumstances of disclosure (Confidential Information) shall be held in confidence and not disclosed and shall not be used except to the extent necessary to carry out the Receiving Partys obligations or express rights hereunder, except as otherwise authorized by the Disclosing Party in writing. For clarity, the Service and Codefresh Content shall be deemed Confidential Information of Codefresh whether or not otherwise designated as such. The Receiving Party shall use the same degree of care that it uses to protect the confidentiality of its own confidential information of like kind (but not less than a reasonable standard of care). These obligations will not apply to information that (i) was previously known by the Receiving Party, as demonstrated by documents or files in existence at the time of disclosure, (ii) is generally and freely publicly available through no fault of the Receiving Party, (iii) the Receiving Party otherwise rightfully obtains from third parties without restriction, or (iv) is independently developed by the Receiving Party without reference to or reliance on the Disclosing Partys Confidential Information, as demonstrated by documents or files in existence at the time of disclosure. The Receiving Party may disclose Confidential Information of the Disclosing Party to the extent compelled by law to do so, provided the Receiving Party gives the Disclosing Party prior notice of the compelled disclosure (to the extent legally permitted) and reasonable assistance, at the Disclosing Partys cost, if the Disclosing Party wishes to contest the disclosure. If the Receiving Party is compelled by law to disclose the Disclosing Partys Confidential Information as part of a civil proceeding to which the Disclosing Party is a party, and the Disclosing Party is not contesting the disclosure, the Disclosing Party will reimburse the Receiving Party for its reasonable cost of compiling and providing secure access to that Confidential Information. In the event that such protective order or other remedy is not obtained, the Receiving Party shall furnish only that portion of the Confidential Information that is legally required and use commercially reasonable efforts to obtain assurance that confidential treatment will be accorded the Confidential Information. 14.1. Codefresh shall maintain administrative, physical, and technical safeguards for protection of the security, confidentiality and integrity of Licensee Content that is Licensees Confidential Information (Confidential Licensee Content). Those safeguards shall include, but will not be limited to, measures for preventing access, use, modification, or disclosure of Confidential Licensee Content by Codefreshs personnel except (a) to provide the Service and prevent or address service or technical problems, (b) as compelled by law in accordance with Section 13 (Confidentiality) above, or (c) as Licensee expressly permits in writing. 14.2. 
Licensee understands that the operation of the Service, including Licensee Content, may involve (i) transmissions over various networks; (ii) changes to conform and adapt to technical requirements of connecting networks or devices; and (iii) transmission to Codefreshs third-party vendors and hosting partners solely to provide the necessary hardware, software, networking, storage, and related technology required to operate and maintain the Service. Accordingly, Licensee acknowledges that Licensee bears sole responsibility for adequate backup of Licensee Content. Codefresh will have no liability to Licensee for any unauthorized access or use of any of Licensee Content, or any corruption, deletion, destruction, or loss of any of Licensee Content. 15.1.This Agreement and any action related thereto will be governed by the laws of the State of California without regard to its conflict of laws" }, { "data": "Licensee and Codefresh irrevocably consent to the jurisdiction of, and venue in, the state or federal courts located in the County of San Francisco, California for any disputes arising under this Agreement, provided that the foregoing submission to jurisdiction and venue shall in no way limit the obligation to arbitrate disputes set forth in Section 15.2. 15.2.Except for actions to protect a partys intellectual property rights and to enforce an arbitrators decision hereunder, any controversy or claim arising out of or relating to this Agreement, or the breach thereof, shall be settled by arbitration administered by the American Arbitration Association (AAA) under its Commercial Arbitration Rules, or such applicable substantially equivalent rules as the AAA may adopt that are then in effect (the AAA Rules), and judgment on the award rendered by the arbitrator(s) may be entered in any court having jurisdiction thereof. There shall be one arbitrator, and such arbitrator shall be chosen by mutual agreement of the parties in accordance with AAA Rules. The arbitration shall be conducted remotely to the extent practicable and otherwise in San Francisco, California. The arbitrator shall apply the laws of the State of California to all issues in dispute. The controversy or claim shall be arbitrated on an individual basis and shall not be consolidated in any arbitration with any claim or controversy of any other party. The findings of the arbitrator shall be final and binding on the parties and may be entered in any court of competent jurisdiction for enforcement. Enforcements of any award or judgment shall be governed by the Federal Arbitration Act. 16.1.Assignment. Neither party may assign this Agreement without the other partys prior written consent and any attempt to do so will be void, except that either party may assign this Agreement, without the other partys consent, to a successor or acquirer, as the case may be, in connection with a merger, acquisition, sale of all or substantially all of such partys assets or substantially similar transaction, provided, however, that Licensee may not assign this Agreement to a competitor or customer of Codefresh without Codefreshs prior written consent. Subject to the foregoing, this Agreement will bind and benefit the parties and their respective successors and assigns. 16.2.Electronic Signature. The parties consent to using electronic signatures to sign this Agreement and to be legally bound to their electronic signatures. The parties acknowledge that his or her electronic signature will have the same legal force and effect as a handwritten signature. 
16.3.Fees. In any action between the parties seeking enforcement of any of the terms and provisions of this Agreement, the prevailing party in such action shall be awarded, in addition to damages, injunctive or other relief, its reasonable costs and expenses, not limited to taxable costs, reasonable attorneys fees, expert fees, and court fees and expenses. 16.4.No Partnership or Joint Venture. The Agreement is not intended to be, and shall not be construed as, an agreement to form a partnership, agency relationship, or a joint venture between the parties. Except as otherwise specifically provided in the Agreement, neither party shall be authorized to act as an agent of or otherwise to represent the other party. 16.5.Headings. Captions to, and headings of, the articles, sections, subsections, paragraphs, or subparagraphs of this Agreement are solely for the convenience of the parties, are not a part of this Agreement, and shall not be used for the interpretation or determination of the validity of this Agreement or any provision hereof. 16.6.Publicity. Licensee grants Codefresh the right to use Licensees company name and logo as a reference for marketing or promotional purposes on Codefreshs website and in other public or private communications with its existing or potential customers, subject to Licensees standard trademark usage guidelines as provided to Codefresh from time-to-time. 16.7.No Election of" }, { "data": "Except as expressly set forth in this Agreement, the exercise by either party of any of its remedies under this Agreement will not be deemed an election of remedies and will be without prejudice to its other remedies under this Agreement or available at law or in equity or otherwise. 16.8.Notices. All notices required or permitted under this Agreement will be in writing, will reference this Agreement, and will be deemed given: (i) when delivered personally; (ii) one (1) business day after deposit with a nationally-recognized express courier, with written confirmation of receipt; (iii) three (3) business days after having been sent by registered or certified mail, return receipt requested, postage prepaid; or (iv) twenty-four (24) hours after having been sent via e-mail to the contact person at the address listed in each Order (or if to Codefresh, at legal@codefresh.io) unless a party notifies the other party in writing of a change to the contact person and/or the contact persons contact information. Email shall not be sufficient for notices of termination or an indemnifiable claim. All such notices will be sent to the addresses set forth above or to such other address as may be specified by either party to the other party in accordance with this Section. 16.9.Waiver & Severability. The failure by either party to enforce any provision of this Agreement will not constitute a waiver of future enforcement of that or any other provision. The waiver of any such right or provision will be effective only if in writing and signed by a duly authorized representative of each party. If any provision of this Agreement is held invalid or unenforceable by a court of competent jurisdiction, the remaining provisions of this Agreement will remain in full force and effect, and the provision affected will be construed so as to be enforceable to the maximum extent permissible by law. 16.10.Entire Agreement. 
This Agreement, together with the SLA and any subsequently executed Order(s), constitutes the complete and exclusive agreement of the parties with respect to its subject matter and supersedes all prior understandings and agreements, whether written or oral, with respect to its subject matter. Any waiver, modification, or amendment of any provision of this Agreement will be effective only if in writing and signed by the parties hereto. 16.11.Force Majeure. Neither party will be responsible for any failure or delay in its performance under this Agreement due to causes beyond its reasonable control, including, but not limited to, labor disputes, strikes, lockouts, shortages of or inability to obtain labor, energy, raw materials or supplies, war, acts of terror, riot, acts of God or governmental action. 16.12.Counterparts. This Agreement may be executed in counterparts, each of which will be deemed an original, but all of which together will constitute one and the same instrument. 16.13. Updating Terms. As its business evolves, Codefresh may change these Terms (not including any then-current, active Orders) from time to time. Licensee may review the most current version of these Terms at any time by visiting https://codefresh.io/docs/docs/terms-and-privacy-policy/terms-of-service/ and by visiting the most current versions of the other pages that are referenced in the Agreement. All changes will become effective upon posting of the change. If Client (or any User) accesses or uses the Services after the effective date, that use will constitute Clients acceptance of any revised terms and conditions. Codefresh may change these Terms from time to time by providing Licensee and Users at least thirty (30) days notice either by emailing the email address associated with Licensees or Users account or by posting a notice on the Service. Terms last updated May" } ]
{ "category": "App Definition and Development", "file_name": "docs.html.md", "project_name": "Concourse", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "| 0 | 1 | |--:|:-| | 1.1 | Getting Started | | 1.2 | Install | | 1.3 | Auth & Teams | | 1.4 | The fly CLI | | 1.5 | Config Basics | | 1.6 | Pipelines | | 1.7 | Vars | | 1.8 | Resources | | 1.9 | Resource Types | | 1.1 | Jobs | | 1.11 | Steps | | 1.12 | Tasks | | 1.13 | Builds | | 1.14 | How-To Guides | | 1.15 | Operation | | 1.16 | Observation | | 1.17 | Internals | Concourse is a pipeline-based continuous thing-doer. The word \"pipeline\" is all the rage in CI these days, so being more specific about this term is kind of important; Concourse's pipelines are significantly different from the rest. Pipelines are built around Resources, which represent all external state, and Jobs, which interact with them. Concourse pipelines represent a dependency flow, kind of like distributed Makefiles. Pipelines are designed to be self-contained so as to minimize server-wide configuration. Maximizing portability also mitigates risk, making it easier for projects to recover from CI disasters. Resources like the git resource and s3 resource are used to express source code, dependencies, deployments, and any other external state. This interface is also used to model more abstract things like scheduled or interval triggers, via the time resource. Resource Types are defined as part of the pipeline itself, making the pipelines more self-contained and keeping Concourse itself small and generic without resorting to a complicated plugin system. Jobs are sequences of get, put, and task steps to execute. These steps determine the job's inputs and outputs. Jobs are designed to be idempotent and loosely coupled, allowing the pipeline to grow with the project's needs without requiring engineers to keep too much in their head at a time. Everything in Concourse runs in a container. Instead of modifying workers to install build tools, Tasks describe their own container image (typically using Docker images via the registry-image resource). Concourse admittedly has a steeper learning curve at first, and depending on your background it might be a lot to take in. A core goal of this project is for the curve to flatten out shortly after and result in higher productivity and less stress over time. If this all sounds like gobbeldigook, that's OK - you may want to just continue on, start kicking the tires a bit, and use the above as a quick reference of the \"big picture\" as the mental model sets in." } ]
{ "category": "App Definition and Development", "file_name": "install-devtron-with-cicd-with-gitops.md", "project_name": "Devtron", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "In this section, we describe the steps in detail on how you can install Devtron with CI/CD by enabling GitOps during the installation. Install Helm if you have not installed it. Run the following command to install the latest version of Devtron with CI/CD along with GitOps (Argo CD) module: ``` helm repo add devtron https://helm.devtron.ai helm repo update devtron helm install devtron devtron/devtron-operator \\ --create-namespace --namespace devtroncd \\ --set installer.modules={cicd} \\ --set argo-cd.enabled=true``` Note: If you want to configure Blob Storage during the installation, refer configure blob storage duing installation. To install Devtron on clusters with the multi-architecture nodes (ARM and AMD), append the Devtron installation command with --set installer.arch=multi-arch. Note: If you want to install Devtron for production deployments, please refer to our recommended overrides for Devtron Installation. Configuring Blob Storage in your Devtron environment allows you to store build logs and cache. In case, if you do not configure the Blob Storage, then: You will not be able to access the build and deployment logs after an hour. Build time for commit hash takes longer as cache is not available. Artifact reports cannot be generated in pre/post build and deployment stages. Choose one of the options to configure blob storage: Run the following command to install Devtron along with MinIO for storing logs and cache. ``` helm repo add devtron https://helm.devtron.ai helm repo update devtron helm install devtron devtron/devtron-operator \\ --create-namespace --namespace devtroncd \\ --set installer.modules={cicd} \\ --set minio.enabled=true \\ --set argo-cd.enabled=true``` Note: Unlike global cloud providers such as AWS S3 Bucket, Azure Blob Storage and Google Cloud Storage, MinIO can be hosted locally also. Refer to the AWS specific parameters on the Storage for Logs and Cache page. Run the following command to install Devtron along with AWS S3 buckets for storing build logs and cache: Install using S3 IAM policy. Note: Please ensure that S3 permission policy to the IAM role attached to the nodes of the cluster if you are using below command. 
``` helm repo add devtron https://helm.devtron.ai helm repo update devtron helm install devtron devtron/devtron-operator \\ --create-namespace --namespace devtroncd \\ --set installer.modules={cicd} \\ --set configs.BLOBSTORAGEPROVIDER=S3 \\ --set configs.DEFAULTCACHEBUCKET=demo-s3-bucket \\ --set configs.DEFAULTCACHEBUCKET_REGION=us-east-1 \\ --set configs.DEFAULTBUILDLOGS_BUCKET=demo-s3-bucket \\ --set configs.DEFAULTCDLOGSBUCKETREGION=us-east-1 \\ --set argo-cd.enabled=true``` Install using access-key and secret-key for AWS S3 authentication: ``` helm repo add devtron https://helm.devtron.ai helm repo update devtron helm install devtron devtron/devtron-operator \\ --create-namespace --namespace devtroncd \\ --set installer.modules={cicd} \\ --set configs.BLOBSTORAGEPROVIDER=S3 \\ --set configs.DEFAULTCACHEBUCKET=demo-s3-bucket \\ --set configs.DEFAULTCACHEBUCKET_REGION=us-east-1 \\ --set configs.DEFAULTBUILDLOGS_BUCKET=demo-s3-bucket \\ --set configs.DEFAULTCDLOGSBUCKETREGION=us-east-1 \\ --set secrets.BLOBSTORAGES3ACCESSKEY=<access-key> \\ --set secrets.BLOBSTORAGES3SECRETKEY=<secret-key> \\ --set argo-cd.enabled=true``` Install using S3 compatible storages: ``` helm repo add devtron https://helm.devtron.ai helm repo update devtron helm install devtron devtron/devtron-operator \\ --create-namespace --namespace devtroncd \\ --set installer.modules={cicd} \\ --set configs.BLOBSTORAGEPROVIDER=S3 \\ --set configs.DEFAULTCACHEBUCKET=demo-s3-bucket \\ --set configs.DEFAULTCACHEBUCKET_REGION=us-east-1 \\ --set configs.DEFAULTBUILDLOGS_BUCKET=demo-s3-bucket \\ --set configs.DEFAULTCDLOGSBUCKETREGION=us-east-1 \\ --set secrets.BLOBSTORAGES3ACCESSKEY=<access-key> \\ --set secrets.BLOBSTORAGES3SECRETKEY=<secret-key> \\ --set configs.BLOBSTORAGES3_ENDPOINT=<endpoint> \\ --set argo-cd.enabled=true``` Refer to the Azure specific parameters on the Storage for Logs and Cache page. Run the following command to install Devtron along with Azure Blob Storage for storing build logs and cache: ``` helm repo add devtron https://helm.devtron.ai helm repo update devtron helm install devtron devtron/devtron-operator \\ --create-namespace --namespace devtroncd \\ --set" }, { "data": "\\ --set secrets.AZUREACCOUNTKEY=xxxxxxxxxx \\ --set configs.BLOBSTORAGEPROVIDER=AZURE \\ --set configs.AZUREACCOUNTNAME=test-account \\ --set configs.AZUREBLOBCONTAINERCILOG=ci-log-container \\ --set configs.AZUREBLOBCONTAINERCICACHE=ci-cache-container \\ --set argo-cd.enabled=true``` Refer to the Google Cloud specific parameters on the Storage for Logs and Cache page. 
Run the following command to install Devtron along with Google Cloud Storage for storing build logs and cache: ``` helm repo add devtron https://helm.devtron.ai helm repo update devtron helm install devtron devtron/devtron-operator \\ --create-namespace --namespace devtroncd \\ --set installer.modules={cicd} \\ --set configs.BLOBSTORAGEPROVIDER=GCP \\ --set secrets.BLOBSTORAGEGCPCREDENTIALSJSON=eyJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsInByb2plY3RfaWQiOiAiPHlvdXItcHJvamVjdC1pZD4iLCJwcml2YXRlX2tleV9pZCI6ICI8eW91ci1wcml2YXRlLWtleS1pZD4iLCJwcml2YXRlX2tleSI6ICI8eW91ci1wcml2YXRlLWtleT4iLCJjbGllbnRfZW1haWwiOiAiPHlvdXItY2xpZW50LWVtYWlsPiIsImNsaWVudF9pZCI6ICI8eW91ci1jbGllbnQtaWQ+IiwiYXV0aF91cmkiOiAiaHR0cHM6Ly9hY2NvdW50cy5nb29nbGUuY29tL28vb2F1dGgyL2F1dGgiLCJ0b2tlbl91cmkiOiAiaHR0cHM6Ly9vYXV0aDIuZ29vZ2xlYXBpcy5jb20vdG9rZW4iLCJhdXRoX3Byb3ZpZGVyX3g1MDlfY2VydF91cmwiOiAiaHR0cHM6Ly93d3cuZ29vZ2xlYXBpcy5jb20vb2F1dGgyL3YxL2NlcnRzIiwiY2xpZW50X3g1MDlfY2VydF91cmwiOiAiPHlvdXItY2xpZW50LWNlcnQtdXJsPiJ9Cg== \\ --set configs.DEFAULTCACHEBUCKET=cache-bucket \\ --set configs.DEFAULTBUILDLOGS_BUCKET=log-bucket \\ --set argo-cd.enabled=true``` Note: The installation takes about 15 to 20 minutes to spin up all of the Devtron microservices one by one. Run the following command to check the status of the installation: ``` kubectl -n devtroncd get installers installer-devtron \\ -o jsonpath='{.status.sync.status}'``` The command executes with one of the following output messages, indicating the status of the installation: | Status | Description | |:--|:| | Downloaded | The installer has downloaded all the manifests, and the installation is in progress. | | Applied | The installer has successfully applied all the manifests, and the installation is completed. | Downloaded The installer has downloaded all the manifests, and the installation is in progress. Applied The installer has successfully applied all the manifests, and the installation is completed. Run the following command to check the installer logs: ``` kubectl logs -f -l app=inception -n devtroncd``` Run the following command to get the Devtron dashboard URL: ``` kubectl get svc -n devtroncd devtron-service \\ -o jsonpath='{.status.loadBalancer.ingress}'``` You will get an output similar to the example shown below: ``` [map[hostname:aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com]]``` Use the hostname aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com (Loadbalancer URL) to access the Devtron dashboard. Note: If you do not get a hostname or receive a message that says \"service doesn't exist,\" it means Devtron is still installing. Please wait until the installation is completed. Note: You can also use a CNAME entry corresponding to your domain/subdomain to point to the Loadbalancer URL to access at a customized domain. | Host | Type | Points to | |:--|:-|:--| | devtron.yourdomain.com | CNAME | aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com | devtron.yourdomain.com CNAME aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com When you install Devtron for the first time, it creates a default admin user and password (with unrestricted access to Devtron). You can use that credentials to log in as an administrator. After the initial login, we recommend you set up any SSO service like Google, GitHub, etc., and then add other users (including yourself). Subsequently, all the users can use the same SSO (let's say, GitHub) to log in to Devtron's dashboard. 
The section below explains how to get the administrator credentials. The password is stored in the devtron-secret; depending on your Devtron version it is kept under either the ADMIN_PASSWORD key or the ACD_PASSWORD key, hence the two commands below. Username: admin Password: Run the following command to get the admin password: ``` kubectl -n devtroncd get secret devtron-secret \\ -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d``` Username: admin Password: Run the following command to get the admin password: ``` kubectl -n devtroncd get secret devtron-secret \\ -o jsonpath='{.data.ACD_PASSWORD}' | base64 -d``` If you want to uninstall Devtron or clean up the Devtron helm installer, refer to our uninstall Devtron guide. For installation-related questions, please also see the FAQ section. Note: If you have questions, please let us know on our discord channel." } ]
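As a small convenience, the dashboard-URL and admin-password commands shown on this page can be combined into one short script. This is only a sketch that restates the commands above; it assumes the default devtroncd namespace, service, and secret names used here:

```
# Sketch: fetch the Devtron dashboard URL and admin password in one go.
# Assumes the default namespace/service/secret names used on this page.
DEVTRON_URL=$(kubectl -n devtroncd get svc devtron-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
ADMIN_PASSWORD=$(kubectl -n devtroncd get secret devtron-secret \
  -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d)

echo "Dashboard: http://${DEVTRON_URL}"
echo "Login as 'admin' with password: ${ADMIN_PASSWORD}"
```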
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Devtron", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Devtron is a tool integration platform for Kubernetes. Devtron deeply integrates with products across the lifecycle of microservices i.e., CI/CD, security, cost, debugging, and observability via an intuitive web interface. Devtron helps you to deploy, observe, manage & debug the existing Helm apps in all your clusters. Workflow which understands the domain of Kubernetes, testing, CD, SecOps so that you don't have to write scripts Reusable and composable components so that workflows are easy to construct and reason through Deploy to multiple Kubernetes clusters on multiple cloud/on-prem from one Devtron setup Works for all cloud providers and on-premise Kubernetes clusters Multi-level security policy at global, cluster, environment, and application-level for efficient hierarchical policy management Behavior-driven security policy Define policies and exceptions for Kubernetes resources Define policies for events for faster resolution One place for all historical Kubernetes events Access all manifests securely, such as secret obfuscation Application metrics for CPU, RAM, HTTP status code, and latency with a comparison between new and old Advanced logging with grep and JSON search Intelligent correlation between events, logs for faster triangulation of issue Auto issue identification Fine-grained access control; control who can edit the configuration and who can deploy. Audit log to know who did what and when History of all CI and CD events Kubernetes events impacting application Relevant cloud events and their impact on applications Advanced workflow policies like blackout window, branch environment relationship to secure build and deployment pipelines GitOps exposed through API and UI so that you don't have to interact with git CLI GitOps backed by Postgres for easy analysis Enforce finer access control than Git Deployment metrics to measure the success of the agile process. It captures MTTR, change failure rate, deployment frequency, and deployment size out of the box. Audit log to understand the failure causes Monitor changes across deployments and reverts easily Devtron uses a modified version of Argo Rollout. Application metrics only work for K8s version 1.16+ Check out our contributing guidelines. Directions for opening issues, coding standards, and notes on our development processes are all included. Get updates on Devtron's development and chat with the project maintainers, contributors, and community members. Join the Discord Community Follow @DevtronL on Twitter Raise feature requests, suggest enhancements, report bugs at GitHub issues Read the Devtron blog We, at Devtron, take security and our users' trust very seriously. If you believe you have found a security issue in Devtron, please responsibly disclose it by contacting us at security@devtron.ai. Last updated 3 months ago Was this helpful?" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide walks you through setting up Flagger on Alibaba ServiceMesh. Created an ACK(Alibabacloud Container Service for Kubernetes) cluster instance. Create an ASM(Alibaba ServiceMesh) enterprise instance and add ACK cluster. $ACK_CONFIG: the kubeconfig file path of ACK, which be treated as$HOME/.kube/config in the rest of guide. $MESH_CONFIG: the kubeconfig file path of ASM. In the Alibaba Cloud Service Mesh (ASM) console, on the basic information page, make sure Data-plane KubeAPI access is enabled. When enabled, the Istio resources of the control plane can be managed through the Kubeconfig of the data plane cluster. In the Alibaba Cloud Service Mesh (ASM) console, click Settings to enable the collection of Prometheus monitoring metrics. You can use the self-built Prometheus monitoring, or you can use the Alibaba Cloud ARMS Prometheus monitoring plug-in that has joined the ACK cluster, and use ARMS Prometheus to collect monitoring indicators. Add Flagger Helm repository: ``` helm repo add flagger https://flagger.app``` Install Flagger's Canary CRD: ``` kubectl apply -f https://raw.githubusercontent.com/fluxcd/flagger/v1.21.0/artifacts/flagger/crd.yaml``` In the Alibaba Cloud Service Mesh (ASM) console, click Cluster & Workload Management, select the Kubernetes cluster, select the target ACK cluster, and add it to ASM. If you are using Alibaba Cloud Container Service for Kubernetes (ACK) ARMS Prometheus monitoring, replace {Region-ID} in the link below with your region ID, such as cn-hangzhou. {ACKID} is the ACK ID of the data plane cluster that you added to Alibaba Cloud Service Mesh (ASM). Visit the following links to query the public and intranet addresses monitored by ACK's ARMS Prometheus: https://arms.console.aliyun.com/#/promDetail/{Region-ID}/{ACK-ID}/setting An example of an intranet address is as follows: http://{Region-ID}-intranet.arms.aliyuncs.com:9090/api/v1/prometheus/{Prometheus-ID}/{u-id}/{ACK-ID}/{Region-ID} Replace the value of metricsServer with your Prometheus address. ``` helm upgrade -i flagger flagger/flagger \\ --namespace=istio-system \\ --set crd.create=false \\ --set meshProvider=istio \\ --set metricsServer=http://prometheus:9090``` Last updated 1 year ago Was this helpful?" } ]
{ "category": "App Definition and Development", "file_name": "alerting.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Flagger is a progressive delivery Kubernetes operator Flagger is a progressive delivery tool that automates the release process for applications running on Kubernetes. It reduces the risk of introducing a new software version in production by gradually shifting traffic to the new version while measuring metrics and running conformance tests. Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring) using a service mesh (App Mesh, Istio, Linkerd, Kuma, Open Service Mesh) or an ingress controller (Contour, Gloo, NGINX, Skipper, Traefik, APISIX) for traffic routing. For release analysis, Flagger can query Prometheus, InfluxDB, Datadog, New Relic, CloudWatch, Stackdriver or Graphite and for alerting it uses Slack, MS Teams, Discord and Rocket. Flagger can be configured with Kubernetes custom resources and is compatible with any CI/CD solutions made for Kubernetes. Since Flagger is declarative and reacts to Kubernetes events, it can be used in GitOps pipelines together with tools like Flux, JenkinsX, Carvel, Argo, etc. Flagger is a Cloud Native Computing Foundation project and part of Flux family of GitOps tools. To get started with Flagger, choose one of the supported routing providers and install Flagger with Helm or Kustomize. After installing Flagger, you can follow one of these tutorials to get started: Service mesh tutorials Istio Linkerd AWS App Mesh AWS App Mesh: Canary Deployment Using Flagger Open Service Mesh Kuma Ingress controller tutorials Contour Gloo NGINX Ingress Skipper Ingress Traefik Apache APISIX Hands-on GitOps workshops Istio Linkerd AWS App Mesh The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page. Last updated 4 months ago Was this helpful?" } ]
{ "category": "App Definition and Development", "file_name": "apisix-progressive-delivery.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide shows you how to use the Apache APISIX and Flagger to automate canary deployments. Flagger requires a Kubernetes cluster v1.19 or newer and Apache APISIX v2.15 or newer and Apache APISIX Ingress Controller v1.5.0 or newer. Install Apache APISIX and Apache APISIX Ingress Controller with Helm v3: ``` helm repo add apisix https://charts.apiseven.com kubectl create ns apisix helm upgrade -i apisix apisix/apisix --version=0.11.3 \\ --namespace apisix \\ --set apisix.podAnnotations.\"prometheus\\.io/scrape\"=true \\ --set apisix.podAnnotations.\"prometheus\\.io/port\"=9091 \\ --set apisix.podAnnotations.\"prometheus\\.io/path\"=/apisix/prometheus/metrics \\ --set pluginAttrs.prometheus.export_addr.ip=0.0.0.0 \\ --set pluginAttrs.prometheus.export_addr.port=9091 \\ --set pluginAttrs.prometheus.export_uri=/apisix/prometheus/metrics \\ --set pluginAttrs.prometheus.metricprefix=apisix \\ --set ingress-controller.enabled=true \\ --set ingress-controller.config.apisix.serviceNamespace=apisix``` Install Flagger and the Prometheus add-on in the same namespace as Apache APISIX: ``` helm repo add flagger https://flagger.app helm upgrade -i flagger flagger/flagger \\ --namespace apisix \\ --set prometheus.install=true \\ --set meshProvider=apisix``` Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services and an ApisixRoute). These objects expose the application outside the cluster and drive the canary analysis and promotion. Create a test namespace: ``` kubectl create ns test``` Create a deployment and a horizontal pod autoscaler: ``` kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main``` Deploy the load testing service to generate traffic during the canary analysis: ``` helm upgrade -i flagger-loadtester flagger/loadtester \\ --namespace=test``` Create an Apache APISIX ApisixRoute, Flagger will reference and generate the canary Apache APISIX ApisixRoute (replace app.example.com with your own domain): ``` apiVersion: apisix.apache.org/v2 kind: ApisixRoute metadata: name: podinfo namespace: test spec: http: backends: serviceName: podinfo servicePort: 80 match: hosts: app.example.com methods: GET paths: /* name: method plugins: name: prometheus enable: true config: disable: false prefer_name: true``` Save the above resource as podinfo-apisixroute.yaml and then apply it: ``` kubectl apply -f ./podinfo-apisixroute.yaml``` Create a canary custom resource (replace app.example.com with your own domain): ``` apiVersion: flagger.app/v1beta1 kind: Canary metadata: name: podinfo namespace: test spec: provider: apisix targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo routeRef: apiVersion: apisix.apache.org/v2 kind: ApisixRoute name: podinfo progressDeadlineSeconds: 60 service: port: 80 targetPort: 9898 analysis: interval: 10s threshold: 10 maxWeight: 50 stepWeight: 10 metrics: name: request-success-rate thresholdRange: min: 99 interval: 1m name: request-duration thresholdRange: max: 500 interval: 30s webhooks: name: load-test url: http://flagger-loadtester.test/ timeout: 5s type: rollout metadata: cmd: |- hey -z 1m -q 10 -c 2 -h2 -host app.example.com http://apisix-gateway.apisix/api/info``` Save the above resource as podinfo-canary.yaml and then apply it: ``` kubectl apply -f ./podinfo-canary.yaml``` After a couple of seconds Flagger will create the canary objects: ``` deployment.apps/podinfo horizontalpodautoscaler.autoscaling/podinfo 
apisixroute/podinfo canary.flagger.app/podinfo deployment.apps/podinfo-primary horizontalpodautoscaler.autoscaling/podinfo-primary service/podinfo service/podinfo-canary service/podinfo-primary apisixroute/podinfo-podinfo-canary``` Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP requests success rate, requests average duration and pod health. Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams. Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=stefanprodan/podinfo:6.0.1``` Flagger detects that the deployment revision changed and starts a new rollout: ``` kubectl -n test describe canary/podinfo Status: Canary Weight: 0 Conditions: Message: Canary analysis completed successfully, promotion finished. Reason: Succeeded Status: True Type: Promoted Failed Checks: 1 Iterations: 0 Phase: Succeeded Events: Type Reason Age From Message - - - Warning Synced 2m59s flagger podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less than desired generation Warning Synced 2m50s flagger podinfo-primary.test not ready: waiting for rollout to finish: 0 of 1 (readyThreshold 100%) updated replicas are available Normal Synced 2m40s (x3 over 2m59s) flagger all the metrics providers are available! Normal Synced 2m39s flagger Initialization done!" }, { "data": "Normal Synced 2m20s flagger New revision detected! Scaling up podinfo.test Warning Synced 2m (x2 over 2m10s) flagger canary deployment podinfo.test not ready: waiting for rollout to finish: 0 of 1 (readyThreshold 100%) updated replicas are available Normal Synced 110s flagger Starting canary analysis for podinfo.test Normal Synced 109s flagger Advance podinfo.test canary weight 10 Warning Synced 100s flagger Halt advancement no values found for apisix metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found Normal Synced 90s flagger Advance podinfo.test canary weight 20 Normal Synced 80s flagger Advance podinfo.test canary weight 30 Normal Synced 69s flagger Advance podinfo.test canary weight 40 Normal Synced 59s flagger Advance podinfo.test canary weight 50 Warning Synced 30s (x2 over 40s) flagger podinfo-primary.test not ready: waiting for rollout to finish: 1 old replicas are pending termination Normal Synced 9s (x3 over 50s) flagger (combined from similar events): Promotion completed! Scaling down podinfo.test``` Note that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis. You can monitor all canaries with: ``` watch kubectl get canaries --all-namespaces NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME test podinfo-2 Progressing 10 2022-11-23T05:00:54Z test podinfo Succeeded 0 2022-11-23T06:00:54Z``` During the canary analysis you can generate HTTP 500 errors to test if Flagger pauses and rolls back the faulted version. 
Trigger another canary deployment: ``` kubectl -n test set image deployment/podinfo \\ podinfod=stefanprodan/podinfo:6.0.2``` Exec into the load tester pod with: ``` kubectl -n test exec -it deploy/flagger-loadtester bash``` Generate HTTP 500 errors: ``` hey -z 1m -c 5 -q 5 -host app.example.com http://apisix-gateway.apisix/status/500``` Generate latency: ``` watch -n 1 curl -H \\\"host: app.example.com\\\" http://apisix-gateway.apisix/delay/1``` When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed. ``` kubectl -n apisix logs deploy/flagger -f | jq .msg \"New revision detected! Scaling up podinfo.test\" \"canary deployment podinfo.test not ready: waiting for rollout to finish: 0 of 1 (readyThreshold 100%) updated replicas are available\" \"Starting canary analysis for podinfo.test\" \"Advance podinfo.test canary weight 10\" \"Halt podinfo.test advancement success rate 0.00% < 99%\" \"Halt podinfo.test advancement success rate 26.76% < 99%\" \"Halt podinfo.test advancement success rate 34.19% < 99%\" \"Halt podinfo.test advancement success rate 37.32% < 99%\" \"Halt podinfo.test advancement success rate 39.04% < 99%\" \"Halt podinfo.test advancement success rate 40.13% < 99%\" \"Halt podinfo.test advancement success rate 48.28% < 99%\" \"Halt podinfo.test advancement success rate 50.35% < 99%\" \"Halt podinfo.test advancement success rate 56.92% < 99%\" \"Halt podinfo.test advancement success rate 67.70% < 99%\" \"Rolling back podinfo.test failed checks threshold reached 10\" \"Canary failed! Scaling down podinfo.test\"``` The canary analysis can be extended with Prometheus queries. Create a metric template and apply it on the cluster: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: not-found-percentage namespace: test spec: provider: type: prometheus address: http://flagger-prometheus.apisix:9090 query: | sum( rate( apisixhttpstatus{ route=~\"{{ namespace }}{{ route }}-{{ target }}-canary.+\", code!~\"4..\" }[{{ interval }}] ) ) / sum( rate( apisixhttpstatus{ route=~\"{{ namespace }}{{ route }}-{{ target }}-canary.+\" }[{{ interval }}] ) ) * 100``` Edit the canary analysis and add the not found error rate check: ``` analysis: metrics: name: \"404s percentage\" templateRef: name: not-found-percentage thresholdRange: max: 5 interval: 1m``` The above configuration validates the canary by checking if the HTTP 404 req/sec percentage is below 5 percent of the total traffic. If the 404s rate reaches the 5% threshold, then the canary fails. The above procedures can be extended with more custom metrics checks, webhooks, manual promotion approval and Slack or MS Teams notifications. Last updated 1 year ago" } ]
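For example, to get the Slack notifications mentioned above on a per-canary basis, Flagger provides an AlertProvider custom resource that the canary analysis can reference. The sketch below is an assumption-laden illustration, not part of this tutorial: the webhook URL, secret name, channel, and the apisix namespace (where Flagger runs in this guide) are placeholders.

```
# Sketch: per-canary Slack alerts via Flagger's AlertProvider.
# The webhook URL, secret name, and channel are placeholders.
kubectl -n apisix create secret generic on-call-url \
  --from-literal=address=https://hooks.slack.com/services/YOUR/WEBHOOK/URL

kubectl apply -f - <<'EOF'
apiVersion: flagger.app/v1beta1
kind: AlertProvider
metadata:
  name: on-call
  namespace: apisix
spec:
  type: slack
  channel: on-call-alerts
  username: flagger
  # the webhook address is read from the secret created above
  secretRef:
    name: on-call-url
EOF

# In the Canary resource, the provider would then be referenced from the analysis section, e.g.:
#   analysis:
#     alerts:
#       - name: "on-call Slack"
#         severity: error
#         providerRef:
#           name: on-call
#           namespace: apisix
```

See the Flagger alerting documentation for the exact fields and supported providers.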
{ "category": "App Definition and Development", "file_name": "appmesh-progressive-delivery.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide shows you how to use the Skipper ingress controller and Flagger to automate canary deployments. Flagger requires a Kubernetes cluster v1.19 or newer and Skipper ingress v0.13 or newer. Install Skipper ingress-controller using upstream definition. Certain arguments are relevant: ``` -enable-connection-metrics -histogram-metric-buckets=.01,1,10,100 -kubernetes -kubernetes-in-cluster -kubernetes-path-mode=path-prefix -metrics-exp-decay-sample -metrics-flavour=prometheus -route-backend-metrics -route-backend-error-counters -route-response-metrics -serve-host-metrics -serve-route-metrics -whitelisted-healthcheck-cidr=0.0.0.0/0 # permit Kind source health checks``` Install Flagger using kustomize: ``` kustomize build https://github.com/fluxcd/flagger/kustomize/kubernetes | kubectl apply -f -``` Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services and canary ingress). These objects expose the application outside the cluster and drive the canary analysis and promotion. Create a test namespace: ``` kubectl create ns test``` Create a deployment and a horizontal pod autoscaler: ``` kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main``` Deploy the load testing service to generate traffic during the canary analysis: ``` helm upgrade -i flagger-loadtester flagger/loadtester \\ --namespace=test``` Create an ingress definition (replace app.example.com with your own domain): ``` apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: podinfo namespace: test labels: app: podinfo annotations: kubernetes.io/ingress.class: \"skipper\" spec: rules: host: \"app.example.com\" http: paths: pathType: Prefix path: \"/\" backend: service: name: podinfo port: number: 80``` Save the above resource as podinfo-ingress.yaml and then apply it: ``` kubectl apply -f ./podinfo-ingress.yaml``` Create a canary custom resource (replace app.example.com with your own domain): ``` apiVersion: flagger.app/v1beta1 kind: Canary metadata: name: podinfo namespace: test spec: provider: skipper targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo ingressRef: apiVersion: networking.k8s.io/v1 kind: Ingress name: podinfo autoscalerRef: apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler name: podinfo progressDeadlineSeconds: 60 service: port: 80 targetPort: 9898 analysis: interval: 10s threshold: 10 maxWeight: 50 stepWeight: 5 metrics: name: request-success-rate interval: 1m thresholdRange: min: 99 name: request-duration interval: 1m thresholdRange: max: 500 webhooks: name: gate type: confirm-rollout url: http://flagger-loadtester.test/gate/approve name: acceptance-test type: pre-rollout url: http://flagger-loadtester.test/ timeout: 10s metadata: type: bash cmd: \"curl -sd 'test' http://podinfo-canary/token | grep token\" name: load-test type: rollout url: http://flagger-loadtester.test/ timeout: 5s metadata: type: cmd cmd: \"hey -z 10m -q 10 -c 2 -host app.example.com http://skipper-ingress.kube-system\" logCmdOutput: \"true\"``` Save the above resource as podinfo-canary.yaml and then apply it: ``` kubectl apply -f ./podinfo-canary.yaml``` After a couple of seconds Flagger will create the canary objects: ``` deployment.apps/podinfo horizontalpodautoscaler.autoscaling/podinfo ingress.networking.k8s.io/podinfo-ingress canary.flagger.app/podinfo deployment.apps/podinfo-primary horizontalpodautoscaler.autoscaling/podinfo-primary service/podinfo 
service/podinfo-canary service/podinfo-primary ingress.networking.k8s.io/podinfo-canary``` Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP requests success rate, requests average duration and pod health. Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams. Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=stefanprodan/podinfo:4.0.6``` Flagger detects that the deployment revision changed and starts a new rollout: ``` kubectl -n test describe canary/podinfo Status: Canary Weight: 0 Failed Checks: 0 Phase: Succeeded Events: New revision detected! Scaling up podinfo.test Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available Pre-rollout check acceptance-test passed Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Advance podinfo.test canary weight 20 Advance podinfo.test canary weight 25 Advance podinfo.test canary weight 30 Advance" }, { "data": "canary weight 35 Advance podinfo.test canary weight 40 Advance podinfo.test canary weight 45 Advance podinfo.test canary weight 50 Copying podinfo.test template spec to podinfo-primary.test Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Routing all traffic to primary Promotion completed! Scaling down podinfo.test``` Note that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis. You can monitor all canaries with: ``` watch kubectl get canaries --all-namespaces NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME test podinfo-2 Progressing 30 2020-08-14T12:32:12Z test podinfo Succeeded 0 2020-08-14T11:23:88Z``` During the canary analysis you can generate HTTP 500 errors to test if Flagger pauses and rolls back the faulted version. Trigger another canary deployment: ``` kubectl -n test set image deployment/podinfo \\ podinfod=stefanprodan/podinfo:4.0.6``` Exec into the load tester pod with: ``` kubectl -n test exec -it deploy/flagger-loadtester bash``` Generate HTTP 500 errors: ``` hey -z 1m -c 5 -q 5 http://app.example.com/status/500``` Generate latency: ``` watch -n 1 curl http://app.example.com/delay/1``` When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed. ``` kubectl -n flagger-system logs deploy/flagger -f | jq .msg New revision detected! Scaling up podinfo.test Canary deployment podinfo.test not ready: waiting for rollout to finish: 0 of 1 updated replicas are available Starting canary analysis for podinfo.test Pre-rollout check acceptance-test passed Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Advance podinfo.test canary weight 20 Halt podinfo.test advancement success rate 53.42% < 99% Halt podinfo.test advancement success rate 53.19% < 99% Halt podinfo.test advancement success rate 48.05% < 99% Rolling back podinfo.test failed checks threshold reached 3 Canary failed! Scaling down podinfo.test``` The canary analysis can be extended with Prometheus queries. 
Create a metric template and apply it on the cluster: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: latency namespace: test spec: provider: type: prometheus address: http://flagger-prometheus.flagger-system:9090 query: | histogram_quantile(0.99, sum( rate( skipperserveroutedurationseconds_bucket{ route=~\"{{ printf \"kube(ew)?%s%scanary.*%scanary([0-9]+)?\" namespace ingress service }}\", le=\"+Inf\" }[1m] ) ) by (le) )``` Edit the canary analysis and add the latency check: ``` analysis: metrics: name: \"latency\" templateRef: name: latency thresholdRange: max: 0.5 interval: 1m``` The threshold is set to 500ms so if the average request duration in the last minute goes over half a second then the analysis will fail and the canary will not be promoted. Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=stefanprodan/podinfo:4.0.6``` Generate high response latency: ``` watch curl http://app.example.com/delay/2``` Watch Flagger logs: ``` kubectl -n flagger-system logs deployment/flagger -f | jq .msg Starting canary deployment for podinfo.test Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Halt podinfo.test advancement latency 1.20 > 0.5 Halt podinfo.test advancement latency 1.45 > 0.5 Halt podinfo.test advancement latency 1.60 > 0.5 Halt podinfo.test advancement latency 1.69 > 0.5 Halt podinfo.test advancement latency 1.70 > 0.5 Rolling back podinfo.test failed checks threshold reached 5 Canary failed! Scaling down podinfo.test``` If you have alerting configured, Flagger will send a notification with the reason why the canary failed. Last updated 8 months ago Was this helpful?" } ]
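If alerting is not yet configured, one way to receive the failure notifications mentioned above is to pass a global Slack webhook to the Flagger controller. Since this guide installed Flagger with kustomize, the sketch below patches the deployment's arguments directly; the flag names reflect Flagger's CLI as I understand it, and the webhook URL, channel, and container layout are assumptions to verify against your install.

```
# Sketch: add a global Slack webhook to the kustomize-installed Flagger controller.
# Flag names and the webhook URL/channel are assumptions; adjust to your setup.
kubectl -n flagger-system patch deployment flagger --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "-slack-url=https://hooks.slack.com/services/YOUR/WEBHOOK/URL"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "-slack-channel=general"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "-slack-user=flagger"}
]'
```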
{ "category": "App Definition and Development", "file_name": "dev-guide.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This document describes how to build, test and run Flagger from source. Flagger is written in Go and uses Go modules for dependency management. On your dev machine install the following tools: go >= 1.19 git >;= 2.20 bash >= 5.0 make >= 3.81 kubectl >= 1.22 kustomize >= 4.4 helm >= 3.0 docker >= 19.03 You'll also need a Kubernetes cluster for testing Flagger. You can use Minikube, Kind, Docker desktop or any remote cluster (AKS/EKS/GKE/etc) Kubernetes version 1.22 or newer. To start contributing to Flagger, fork the repository on GitHub. Create a dir inside your GOPATH: ``` mkdir -p $GOPATH/src/github.com/fluxcd``` Clone your fork: ``` cd $GOPATH/src/github.com/fluxcd git clone https://github.com/YOUR_USERNAME/flagger cd flagger``` Set Flagger repository as upstream: ``` git remote add upstream https://github.com/fluxcd/flagger.git``` Sync your fork regularly to keep it up-to-date with upstream: ``` git fetch upstream git checkout main git merge upstream/main``` Download Go modules: ``` go mod download``` Build Flagger binary: ``` make build``` Build load tester binary: ``` make loadtester-build``` We require all commits to be signed. By signing off with your signature, you certify that you wrote the patch or otherwise have the right to contribute the material by the rules of the DCO. If your user.name and user.email are configured in your Git config, you can sign your commit automatically with: ``` git commit -s``` Before submitting a PR, make sure your changes are covered by unit tests. If you made changes to go.mod run: ``` go mod tidy``` If you made changes to pkg/apis regenerate Kubernetes client sets with: ``` make codegen``` Run code formatters: ``` go install golang.org/x/tools/cmd/goimports@latest make fmt``` Run unit tests: ``` make test``` If you made changes to pkg/apis regenerate the Kubernetes client sets with: ``` make codegen``` Update the validation spec in artifacts/flagger/crd.yaml and run: ``` make crd``` Note that any change to the CRDs must be accompanied by an update to the Open API schema. Install a service mesh and/or an ingress controller on your cluster and deploy Flagger using one of the install options listed here. If you made changes to the CRDs, apply your local copy with: ``` kubectl apply -f artifacts/flagger/crd.yaml``` Shutdown the Flagger instance installed on your cluster (replace the namespace with your mesh/ingress one): ``` kubectl -n istio-system scale deployment/flagger --replicas=0``` Port forward to your Prometheus instance: ``` kubectl -n istio-system port-forward svc/prometheus 9090:9090``` Run Flagger locally against your remote cluster by specifying a kubeconfig path: ``` go run cmd/flagger/ -kubeconfig=$HOME/.kube/config \\ -log-level=info \\ -mesh-provider=istio \\ -metrics-server=http://localhost:9090``` Another option to manually test your changes is to build and push the image to your container registry: ``` make build docker build -t <YOUR-DOCKERHUB-USERNAME>/flagger:<YOUR-TAG> . docker push <YOUR-DOCKERHUB-USERNAME>/flagger:<YOUR-TAG>``` Deploy your image on the cluster and scale up Flagger: ``` kubectl -n istio-system set image deployment/flagger flagger=<YOUR-DOCKERHUB-USERNAME>/flagger:<YOUR-TAG> kubectl -n istio-system scale deployment/flagger --replicas=1``` Now you can use one of the tutorials to manually test your changes. Flagger end-to-end tests can be run locally with Kubernetes Kind. 
Create a Kind cluster: ``` kind create cluster``` Build the Flagger container image and load it on the cluster: ``` make build docker build -t test/flagger:latest . kind load docker-image test/flagger:latest``` Run the Istio e2e tests: ``` ./test/istio/run.sh``` For each service mesh and ingress controller there is a dedicated e2e test suite; choose the one that matches your changes from this list. When you open a pull request on the Flagger repo, the unit and integration tests will be run in CI." } ]
{ "category": "App Definition and Development", "file_name": "docs.flagger.app#getting-started.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "As part of the analysis process, Flagger can validate service level objectives (SLOs) like availability, error rate percentage, average response time and any other objective based on app specific metrics. If a drop in performance is noticed during the SLOs analysis, the release will be automatically rolled back with minimum impact to end-users. Flagger comes with two builtin metric checks: HTTP request success rate and duration. ``` analysis: metrics: name: request-success-rate interval: 1m thresholdRange: min: 99 name: request-duration interval: 1m thresholdRange: max: 500``` For each metric you can specify a range of accepted values with thresholdRange and the window size or the time series with interval. The builtin checks are available for every service mesh / ingress controller and are implemented with Prometheus queries. The canary analysis can be extended with custom metric checks. Using a MetricTemplate custom resource, you configure Flagger to connect to a metric provider and run a query that returns a float64 value. The query result is used to validate the canary based on the specified threshold range. ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: my-metric spec: provider: type: # can be prometheus, datadog, etc address: # API URL insecureSkipVerify: # if set to true, disables the TLS cert validation secretRef: name: # name of the secret containing the API credentials query: # metric query``` The following variables are available in query templates: name (canary.metadata.name) namespace (canary.metadata.namespace) target (canary.spec.targetRef.name) service (canary.spec.service.name) ingress (canary.spec.ingresRef.name) interval (canary.spec.analysis.metrics[].interval) variables (canary.spec.analysis.metrics[].templateVariables) A canary analysis metric can reference a template with templateRef: ``` analysis: metrics: name: \"my metric\" templateRef: name: my-metric namespace: flagger thresholdRange: min: 10 max: 1000 interval: 1m``` A canary analysis metric can reference a set of custom variables with templateVariables. These variables will be then injected into the query defined in the referred MetricTemplate object during canary analysis: ``` analysis: metrics: name: \"my metric\" templateRef: name: my-metric namespace: flagger thresholdRange: min: 10 max: 1000 interval: 1m templateVariables: direction: inbound``` ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: my-metric spec: provider: type: prometheus address: http://prometheus.linkerd-viz:9090 query: | histogram_quantile( 0.99, sum( rate( responselatencyms_bucket{ namespace=\"{{ namespace }}\", deployment=~\"{{ target }}\", direction=\"{{ variables.direction }}\" }[{{ interval }}] ) ) by (le) )``` You can create custom metric checks targeting a Prometheus server by setting the provider type to prometheus and writing the query in PromQL. 
Prometheus template example: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: not-found-percentage namespace: istio-system spec: provider: type: prometheus address: http://prometheus.istio-system:9090 query: | 100 - sum( rate( istio_requests_total{ reporter=\"destination\", destination_workload_namespace=\"{{ namespace }}\", destination_workload=\"{{ target }}\", response_code!=\"404\" }[{{ interval }}] ) ) / sum( rate( istio_requests_total{ reporter=\"destination\", destination_workload_namespace=\"{{ namespace }}\", destination_workload=\"{{ target }}\" }[{{ interval }}] ) ) * 100``` Reference the template in the canary analysis: ``` analysis: metrics: name: \"404s percentage\" templateRef: name: not-found-percentage namespace: istio-system thresholdRange: max: 5 interval: 1m``` The above configuration validates the canary by checking if the HTTP 404 req/sec percentage is below 5 percent of the total traffic. If the 404s rate reaches the 5% threshold, then the canary fails. Prometheus gRPC error rate example: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: grpc-error-rate-percentage namespace: flagger spec: provider: type: prometheus address:" }, { "data": "query: | 100 - sum( rate( grpc_server_handled_total{ grpc_code!=\"OK\", kubernetes_namespace=\"{{ namespace }}\", kubernetes_pod_name=~\"{{ target }}-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)*\" }[{{ interval }}] ) ) / sum( rate( grpc_server_started_total{ kubernetes_namespace=\"{{ namespace }}\", kubernetes_pod_name=~\"{{ target }}-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)*\" }[{{ interval }}] ) ) * 100``` The above template is for gRPC services instrumented with go-grpc-prometheus. If your Prometheus API requires basic authentication, you can create a secret in the same namespace as the MetricTemplate with the basic-auth credentials: ``` apiVersion: v1 kind: Secret metadata: name: prom-auth namespace: flagger data: username: your-user password: your-password``` or if you require bearer token authentication (via a SA token): ``` apiVersion: v1 kind: Secret metadata: name: prom-auth namespace: flagger data: token: ey1234...``` Then reference the secret in the MetricTemplate: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: my-metric namespace: flagger spec: provider: type: prometheus address: http://prometheus.monitoring:9090 secretRef: name: prom-auth``` You can create custom metric checks using the Datadog provider. Create a secret with your Datadog API credentials: ``` apiVersion: v1 kind: Secret metadata: name: datadog namespace: istio-system data: datadog_api_key: your-datadog-api-key datadog_application_key: your-datadog-application-key``` Datadog template example: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: not-found-percentage namespace: istio-system spec: provider: type: datadog address: https://api.datadoghq.com secretRef: name: datadog query: | 100 - ( sum:istio.mesh.request.count{ reporter:destination, destination_workload_namespace:{{ namespace }}, destination_workload:{{ target }}, !response_code:404 }.as_count() / sum:istio.mesh.request.count{ reporter:destination, destination_workload_namespace:{{ namespace }}, destination_workload:{{ target }} }.as_count() ) * 100``` Reference the template in the canary analysis: ``` analysis: metrics: name: \"404s percentage\" templateRef: name: not-found-percentage namespace: istio-system thresholdRange: max: 5 interval: 1m``` You can create custom metric checks using the CloudWatch metrics provider. 
CloudWatch template example: ``` apiVersion: flagger.app/v1alpha1 kind: MetricTemplate metadata: name: cloudwatch-error-rate spec: provider: type: cloudwatch region: ap-northeast-1 # specify the region of your metrics query: | [ { \"Id\": \"e1\", \"Expression\": \"m1 / m2\", \"Label\": \"ErrorRate\" }, { \"Id\": \"m1\", \"MetricStat\": { \"Metric\": { \"Namespace\": \"MyKubernetesCluster\", \"MetricName\": \"ErrorCount\", \"Dimensions\": [ { \"Name\": \"appName\", \"Value\": \"{{ name }}.{{ namespace }}\" } ] }, \"Period\": 60, \"Stat\": \"Sum\", \"Unit\": \"Count\" }, \"ReturnData\": false }, { \"Id\": \"m2\", \"MetricStat\": { \"Metric\": { \"Namespace\": \"MyKubernetesCluster\", \"MetricName\": \"RequestCount\", \"Dimensions\": [ { \"Name\": \"appName\", \"Value\": \"{{ name }}.{{ namespace }}\" } ] }, \"Period\": 60, \"Stat\": \"Sum\", \"Unit\": \"Count\" }, \"ReturnData\": false } ]``` The query format documentation can be found here. Reference the template in the canary analysis: ``` analysis: metrics: name: \"app error rate\" templateRef: name: cloudwatch-error-rate thresholdRange: max: 0.1 interval: 1m``` Note that Flagger need AWS IAM permission to perform cloudwatch:GetMetricData to use this provider. You can create custom metric checks using the New Relic provider. Create a secret with your New Relic Insights credentials: ``` apiVersion: v1 kind: Secret metadata: name: newrelic namespace: istio-system data: newrelicaccountid: your-account-id newrelicquerykey: your-insights-query-key``` New Relic template example: ``` apiVersion:" }, { "data": "kind: MetricTemplate metadata: name: newrelic-error-rate namespace: ingress-nginx spec: provider: type: newrelic secretRef: name: newrelic query: | SELECT filter(sum(nginxingresscontroller_requests), WHERE status >= '500') / sum(nginxingresscontroller_requests) * 100 FROM Metric WHERE metricName = 'nginxingresscontroller_requests' AND ingress = '{{ ingress }}' AND namespace = '{{ namespace }}'``` Reference the template in the canary analysis: ``` analysis: metrics: name: \"error rate\" templateRef: name: newrelic-error-rate namespace: ingress-nginx thresholdRange: max: 5 interval: 1m``` You can create custom metric checks using the Graphite provider. 
Graphite template example: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: graphite-request-success-rate spec: provider: type: graphite address: http://graphite.monitoring query: | target=summarize( asPercent( sumSeries( stats.timers.httpServerRequests.app.{{target}}.exception..method..outcome.{CLIENT_ERROR,INFORMATIONAL,REDIRECTION,SUCCESS}.status..uri..count ), sumSeries( stats.timers.httpServerRequests.app.{{target}}.exception..method..outcome..status..uri.*.count ) ), {{interval}}, 'avg' )``` Reference the template in the canary analysis: ``` analysis: metrics: name: \"success rate\" templateRef: name: graphite-request-success-rate thresholdRange: min: 90 interval: 1min``` If your Graphite API requires basic authentication, you can create a secret in the same namespace as the MetricTemplate with the basic-auth credentials: ``` apiVersion: v1 kind: Secret metadata: name: graphite-basic-auth namespace: flagger data: username: your-user password: your-password``` Then, reference the secret in the MetricTemplate: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: my-metric namespace: flagger spec: provider: type: graphite address: http://graphite.monitoring secretRef: name: graphite-basic-auth``` Enable Workload Identity on your cluster, create a service account key that has read access to the Cloud Monitoring API and then create an IAM policy binding between the GCP service account and the Flagger service account on Kubernetes. You can take a look at this guide Annotate the flagger service account ``` kubectl annotate serviceaccount flagger \\ --namespace <namespace> \\ iam.gke.io/gcp-service-account=<gcp-serviceaccount-name>@<project-id>.iam.gserviceaccount.com``` Alternatively, you can download the json keys and add it to your secret with the key serviceAccountKey (This method is not recommended). Create a secret that contains your project-id (and, if workload identity is not enabled on your cluster, your service account json). ``` kubectl create secret generic gcloud-sa --from-literal=project=<project-id>``` Then reference the secret in the metric template. Note: The particular MQL query used here works if Istio is installed on GKE. ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: bytes-sent namespace: test spec: provider: type: stackdriver secretRef: name: gcloud-sa query: | fetch k8s_container | metric 'istio.io/service/server/response_latencies' | filter (metric.destinationservicename == '{{ service }}-canary' && metric.destinationservicenamespace == '{{ namespace }}') | align delta(1m) | every 1m | group_by [], [valueresponselatencies_percentile: percentile(value.response_latencies, 99)]``` The reference for the query language can be found here The InfluxDB provider uses the flux query language. Create a secret that contains your authentication token that can be found in the InfluxDB UI. ``` kubectl create secret generic influx-token --from-literal=token=<token>``` Then reference the secret in the metric template. Note: The particular MQL query used here works if Istio is installed on GKE. 
``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: not-found namespace: test spec: provider: type: influxdb secretRef: name: influx-token query: | from(bucket: \"default\") |> range(start: -2h) |> filter(fn: (r) => r[\"_measurement\"] == \"istio_requests_total\") |> filter(fn: (r) => r[\"destination_workload_namespace\"] == \"{{ namespace }}\") |> filter(fn: (r) => r[\"destination_workload\"] == \"{{ target }}\") |> filter(fn: (r) => r[\"response_code\"] == \"500\") |> count() |> yield(name: \"count\")``` You can create custom metric checks using the Dynatrace provider. Create a secret with your Dynatrace token: ``` apiVersion: v1 kind: Secret metadata: name: dynatrace namespace: istio-system data: dynatrace_token: ZHQwYz...``` Dynatrace metric template example: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: response-time-95pct namespace: istio-system spec: provider: type: dynatrace address: https://xxxxxxxx.live.dynatrace.com secretRef: name: dynatrace query: | builtin:service.response.time:filter(eq(dt.entity.service,SERVICE-ABCDEFG0123456789)):percentile(95)``` Reference the template in the canary analysis: ``` analysis: metrics: name: \"response-time-95pct\" templateRef: name: response-time-95pct namespace: istio-system thresholdRange: max: 1000 interval: 1m``` " } ]
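To tie the provider examples above together, here is a hedged sketch of an analysis block that mixes one of the builtin checks with a custom templateRef metric; the template name, namespace and thresholds are illustrative placeholders rather than values taken from this guide. During analysis, each check must stay inside its thresholdRange for the canary to keep advancing.

```yaml
analysis:
  interval: 1m
  threshold: 5
  metrics:
    # builtin Prometheus-backed check, available for every mesh/ingress provider
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m
    # custom check backed by a MetricTemplate (illustrative name and namespace)
    - name: "error rate"
      templateRef:
        name: my-metric
        namespace: flagger
      thresholdRange:
        max: 1
      interval: 1m
```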
{ "category": "App Definition and Development", "file_name": "faq.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Flagger can run automated application analysis, promotion and rollback for the following deployment strategies: Canary Release (progressive traffic shifting) Istio, Linkerd, App Mesh, NGINX, Skipper, Contour, Gloo Edge, Traefik, Open Service Mesh, Kuma, Gateway API, Apache APISIX A/B Testing (HTTP headers and cookies traffic routing) Istio, App Mesh, NGINX, Contour, Gloo Edge, Gateway API Blue/Green (traffic switching) Kubernetes CNI, Istio, Linkerd, App Mesh, NGINX, Contour, Gloo Edge, Open Service Mesh, Gateway API Blue/Green Mirroring (traffic shadowing) Istio, Gateway API Canary Release with Session Affinity (progressive traffic shifting combined with cookie based routing) Istio, Gateway API For Canary releases and A/B testing you'll need a Layer 7 traffic management solution like a service mesh or an ingress controller. For Blue/Green deployments no service mesh or ingress controller is required. A canary analysis is triggered by changes in any of the following objects: Deployment PodSpec (container image, command, ports, env, resources, etc) ConfigMaps mounted as volumes or mapped to environment variables Secrets mounted as volumes or mapped to environment variables Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP requests success rate, requests average duration and pod health. Based on analysis of the KPIs a canary is promoted or aborted. The canary analysis runs periodically until it reaches the maximum traffic weight or the failed checks threshold. Spec: ``` analysis: interval: 1m threshold: 10 maxWeight: 50 stepWeight: 2 stepWeightPromotion: 100 skipAnalysis: false``` The above analysis, if it succeeds, will run for 25 minutes while validating the HTTP metrics and webhooks every minute. You can determine the minimum time it takes to validate and promote a canary deployment using this formula: ``` interval * (maxWeight / stepWeight)``` And the time it takes for a canary to be rollback when the metrics or webhook checks are failing: ``` interval * threshold``` When stepWeightPromotion is specified, the promotion phase happens in stages, the traffic is routed back to the primary pods in a progressive manner, the primary weight is increased until it reaches 100%. In emergency cases, you may want to skip the analysis phase and ship changes directly to production. At any time you can set the spec.skipAnalysis: true. When skip analysis is enabled, Flagger checks if the canary deployment is healthy and promotes it without analysing it. 
If an analysis is underway, Flagger cancels it and runs the" }, { "data": "Gated canary promotion stages: scan for canary deployments check primary and canary deployment status halt advancement if a rolling update is underway halt advancement if pods are unhealthy call confirm-rollout webhooks and check results halt advancement if any hook returns a non HTTP 2xx result call pre-rollout webhooks and check results halt advancement if any hook returns a non HTTP 2xx result increment the failed checks counter increase canary traffic weight percentage from 0% to 2% (step weight) call rollout webhooks and check results check canary HTTP request success rate and latency halt advancement if any metric is under the specified threshold increment the failed checks counter check if the number of failed checks reached the threshold route all traffic to primary scale to zero the canary deployment and mark it as failed call post-rollout webhooks post the analysis result to Slack wait for the canary deployment to be updated and start over increase canary traffic weight by 2% (step weight) till it reaches 50% (max weight) halt advancement if any webhook call fails halt advancement while canary request success rate is under the threshold halt advancement while canary request duration P99 is over the threshold halt advancement while any custom metric check fails halt advancement if the primary or canary deployment becomes unhealthy halt advancement while canary deployment is being scaled up/down by HPA call confirm-promotion webhooks and check results halt advancement if any hook returns a non HTTP 2xx result promote canary to primary copy ConfigMaps and Secrets from canary to primary copy canary deployment spec template over primary wait for primary rolling update to finish halt advancement if pods are unhealthy route all traffic to primary scale to zero the canary deployment mark rollout as finished call post-rollout webhooks send notification with the canary analysis result wait for the canary deployment to be updated and start over By default Flagger uses linear weight values for the promotion, with the start value, the step and the maximum weight value in 0 to 100 range. Example: ``` spec: analysis: maxWeight: 50 stepWeight: 20``` This configuration performs analysis starting from 20, increasing by 20 until weight goes above 50. We would have steps (canary weight : primary weight): 20 (20 : 80) 40 (40 : 60) 60 (60 : 40) promotion In order to enable non-linear promotion a new parameter was introduced: stepWeights - determines the ordered array of weights, which shall be used during canary promotion. Example: ``` spec: analysis: stepWeights: [1, 2, 10, 80]``` This configuration performs analysis starting from 1, going through stepWeights values till 80. We would have steps (canary weight : primary weight): 1 (1 : 99) 2 (2 : 98) 10 (10 : 90) 80 (20 : 60) promotion For frontend applications that require session affinity you should use HTTP headers or cookies match conditions to ensure a set of users will stay on the same version for the whole duration of the canary analysis. You can enable A/B testing by specifying the HTTP match conditions and the number of iterations. If Flagger finds a HTTP match condition, it will ignore the maxWeight and stepWeight settings. 
Istio example: ``` analysis: interval: 1m iterations: 10 threshold: 2 match: headers: x-canary: regex: \".insider.\" headers: cookie: regex: \"^(.?;)?(canary=always)(;.)?$\"``` The above configuration will run an analysis for ten minutes targeting the Safari users and those that have a test cookie. You can determine the minimum time that it takes to validate and promote a canary deployment using this formula: ``` interval * iterations``` And the time it takes for a canary to be rollback when the metrics or webhook checks are failing: ``` interval * threshold``` Istio example: ``` analysis: interval: 1m threshold: 10 iterations: 2 match: headers: x-canary: exact: \"insider\" headers: cookie: regex: \"^(.?;)?(canary=always)(;.)?$\" sourceLabels:" }, { "data": "\"scheduler\"``` The header keys must be lowercase and use hyphen as the separator. Header values are case-sensitive and formatted as follows: exact: \"value\" for exact string match prefix: \"value\" for prefix-based match suffix: \"value\" for suffix-based match regex: \"value\" for RE2 style regex-based match Note that the sourceLabels match conditions are applicable only when the mesh gateway is included in the canary.service.gateways list. App Mesh example: ``` analysis: interval: 1m threshold: 10 iterations: 2 match: headers: user-agent: regex: \".Chrome.\"``` Note that App Mesh supports a single condition. Contour example: ``` analysis: interval: 1m threshold: 10 iterations: 2 match: headers: user-agent: prefix: \"Chrome\"``` Note that Contour does not support regex, you can use prefix, suffix or exact. NGINX example: ``` analysis: interval: 1m threshold: 10 iterations: 2 match: headers: x-canary: exact: \"insider\" headers: cookie: exact: \"canary\"``` Note that the NGINX ingress controller supports only exact matching for cookies names where the value must be set to always. Starting with NGINX ingress v0.31, regex matching is supported for header values. The above configurations will route users with the x-canary header or canary cookie to the canary instance during analysis: ``` curl -H 'X-Canary: insider' http://app.example.com curl -b 'canary=always' http://app.example.com``` For applications that are not deployed on a service mesh, Flagger can orchestrate blue/green style deployments with Kubernetes L4 networking. When using Istio you have the option to mirror traffic between blue and green. You can use the blue/green deployment strategy by replacing stepWeight/maxWeight with iterations in the analysis spec: ``` analysis: interval: 1m iterations: 10 threshold: 2``` With the above configuration Flagger will run conformance and load tests on the canary pods for ten minutes. If the metrics analysis succeeds, live traffic will be switched from the old version to the new one when the canary is promoted. The blue/green deployment strategy is supported for all service mesh providers. 
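For completeness, below is a hedged sketch of a full Blue/Green Canary object built around the iterations-based analysis above. The names, port and webhook URLs are placeholders, and the builtin request metrics are omitted on purpose: without a mesh or ingress that exposes traffic metrics, validation typically relies on webhooks or custom metric checks.

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: kubernetes        # plain Kubernetes L4 traffic switching, no mesh required
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m
    iterations: 10
    threshold: 2
    webhooks:
      # conformance test run against the canary pods before traffic is switched
      - name: smoke-test
        type: pre-rollout
        url: http://flagger-loadtester.test/
        timeout: 30s
        metadata:
          type: bash
          cmd: "curl -s http://podinfo-canary.test:9898/healthz"
      # load test executed while the analysis iterations run
      - name: load-test
        url: http://flagger-loadtester.test/
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```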
Blue/Green rollout steps for service mesh: detect new revision (deployment spec, secrets or configmaps changes) scale up the canary (green) run conformance tests for the canary pods run load tests and metric checks for the canary pods every minute abort the canary release if the failure threshold is reached route traffic to canary (This doesn't happen when using the kubernetes provider) promote canary spec over primary (blue) wait for primary rollout route traffic to primary scale down canary After the analysis finishes, the traffic is routed to the canary (green) before triggering the primary (blue) rolling update, this ensures a smooth transition to the new version avoiding dropping in-flight requests during the Kubernetes deployment rollout. Traffic Mirroring is a pre-stage in a Canary (progressive traffic shifting) or Blue/Green deployment strategy. Traffic mirroring will copy each incoming request, sending one request to the primary and one to the canary service. The response from the primary is sent back to the user. The response from the canary is discarded. Metrics are collected on both requests so that the deployment will only proceed if the canary metrics are healthy. Mirroring should be used for requests that are idempotent or capable of being processed twice (once by the primary and once by the canary). Reads are" }, { "data": "Before using mirroring on requests that may be writes, you should consider what will happen if a write is duplicated and handled by the primary and canary. To use mirroring, set spec.analysis.mirror to true. ``` analysis: interval: 1m iterations: 10 threshold: 2 mirror: true mirrorWeight: 100``` Mirroring rollout steps for service mesh: detect new revision (deployment spec, secrets or configmaps changes) scale from zero the canary deployment wait for the HPA to set the canary minimum replicas check canary pods health run the acceptance tests abort the canary release if tests fail start the load tests mirror 100% of the traffic from primary to canary check request success rate and request duration every minute abort the canary release if the failure threshold is reached stop traffic mirroring after the number of iterations is reached route live traffic to the canary pods promote the canary (update the primary secrets, configmaps and deployment spec) wait for the primary deployment rollout to finish wait for the HPA to set the primary minimum replicas check primary pods health switch live traffic back to primary scale to zero the canary send notification with the canary analysis result After the analysis finishes, the traffic is routed to the canary (green) before triggering the primary (blue) rolling update, this ensures a smooth transition to the new version avoiding dropping in-flight requests during the Kubernetes deployment rollout. This deployment strategy mixes a Canary Release with A/B testing. A Canary Release is helpful when we're trying to expose new features to users progressively, but because of the very nature of its routing (weight based), users can land on the application's old version even after they have been routed to the new version previously. This can be annoying, or worse break how other services interact with our application. To address this issue, we borrow some things from A/B testing. Since A/B testing is particularly helpful for applications that require session affinity, we integrate cookie based routing with regular weight based routing. 
This means once a user is exposed to the new version of our application (based on the traffic weights), they're always routed to that version, i.e. they're never routed back to the old version of our application. You can enable this by specifying .spec.analysis.sessionAffinity in the Canary: ``` analysis: interval: 1m threshold: 10 maxWeight: 50 stepWeight: 2 sessionAffinity: cookieName: flagger-cookie maxAge: 21600``` .spec.analysis.sessionAffinity.cookieName is the name of the cookie that is stored. The value of the cookie is a randomly generated string of characters that acts as a unique identifier. For the above config, the response header of a request routed to the canary deployment during a Canary run will look like: ``` Set-Cookie: flagger-cookie=LpsIaLdoNZ; Max-Age=21600``` After a Canary run is over and all traffic is shifted back to the primary deployment, all responses will have the following header: ``` Set-Cookie: flagger-cookie=LpsIaLdoNZ; Max-Age=-1``` This tells the client to delete the cookie, making sure there are no junk cookies lying around in the user's system. If a new Canary run is triggered, the response header will set a new cookie for all requests routed to the Canary deployment: ``` Set-Cookie: flagger-cookie=McxKdLQoIN; Max-Age=21600``` " } ]
{ "category": "App Definition and Development", "file_name": "flagger-install-on-alibaba-servicemesh.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Flagger can be configured to automate the release process for Kubernetes workloads with a custom resource named canary. The canary custom resource defines the release process of an application running on Kubernetes and is portable across clusters, service meshes and ingress providers. For a deployment named podinfo, a canary release with progressive traffic shifting can be defined as: ``` apiVersion: flagger.app/v1beta1 kind: Canary metadata: name: podinfo spec: targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo service: port: 9898 analysis: interval: 1m threshold: 10 maxWeight: 50 stepWeight: 5 metrics: name: request-success-rate thresholdRange: min: 99 interval: 1m name: request-duration thresholdRange: max: 500 interval: 1m webhooks: name: load-test url: http://flagger-loadtester.test/ metadata: cmd: \"hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/\"``` When you deploy a new version of an app, Flagger gradually shifts traffic to the canary, and at the same time, measures the requests success rate as well as the average response duration. You can extend the canary analysis with custom metrics, acceptance and load testing to harden the validation process of your app release process. If you are running multiple service meshes or ingress controllers in the same cluster, you can override the global provider for a specific canary with spec.provider. A canary resource can target a Kubernetes Deployment or DaemonSet. Kubernetes Deployment example: ``` spec: progressDeadlineSeconds: 60 targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo autoscalerRef: apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler name: podinfo primaryScalerReplicas: minReplicas: 2 maxReplicas: 5``` Based on the above configuration, Flagger generates the following Kubernetes objects: deployment/<targetRef.name>-primary hpa/<autoscalerRef.name>-primary The primary deployment is considered the stable release of your app, by default all traffic is routed to this version and the target deployment is scaled to zero. Flagger will detect changes to the target deployment (including secrets and configmaps) and will perform a canary analysis before promoting the new version as primary. Use .spec.autoscalerRef.primaryScalerReplicas to override the replica scaling configuration for the generated primary HorizontalPodAutoscaler. This is useful for situations when you want to have a different scaling configuration for the primary workload as opposed to using the same values from the original workload HorizontalPodAutoscaler. Note that the target deployment must have a single label selector in the format app: <DEPLOYMENT-NAME>: ``` apiVersion: apps/v1 kind: Deployment metadata: name: podinfo spec: selector: matchLabels: app: podinfo template: metadata: labels: app: podinfo``` In addition to app, Flagger supports name and app.kubernetes.io/name selectors. If you use a different convention you can specify your label with the -selector-labels=my-app-label command flag in the Flagger deployment manifest under containers args or by setting --set selectorLabels=my-app-label when installing Flagger with Helm. If the target deployment uses secrets and/or configmaps, Flagger will create a copy of each object using the -primary suffix and will reference these objects in the primary deployment. If you annotate your ConfigMap or Secret with flagger.app/config-tracking: disabled, Flagger will use the same object for the primary deployment instead of making a primary copy. 
You can disable the secrets/configmaps tracking globally with the -enable-config-tracking=false command flag in the Flagger deployment manifest under containers args or by setting --set configTracking.enabled=false when installing Flagger with Helm, but disabling config-tracking using the per Secret/ConfigMap annotation may fit your use-case" }, { "data": "The autoscaler reference is optional, when specified, Flagger will pause the traffic increase while the target and primary deployments are scaled up or down. HPA can help reduce the resource usage during the canary analysis. When the autoscaler reference is specified, any changes made to the autoscaler are only made active in the primary autoscaler when a rollout for the deployment starts and completes successfully. Optionally, you can create two HPAs, one for canary and one for the primary to update the HPA without doing a new rollout. As the canary deployment will be scaled to 0, the HPA on the canary will be inactive. Note Flagger requires autoscaling/v2 or autoscaling/v2beta2 API version for HPAs. The progress deadline represents the maximum time in seconds for the canary deployment to make progress before it is rolled back, defaults to ten minutes. A canary resource dictates how the target workload is exposed inside the cluster. The canary target should expose a TCP port that will be used by Flagger to create the ClusterIP Services. ``` spec: service: name: podinfo port: 9898 portName: http appProtocol: http targetPort: 9898 portDiscovery: true``` The container port from the target workload should match the service.port or service.targetPort. The service.name is optional, defaults to spec.targetRef.name. The service.targetPort can be a container port number or name. The service.portName is optional (defaults to http), if your workload uses gRPC then set the port name to grpc. The service.appProtocol is optional, more details can be found here. If port discovery is enabled, Flagger scans the target workload and extracts the containers ports excluding the port specified in the canary service and service mesh sidecar ports. These ports will be used when generating the ClusterIP services. Based on the canary spec service, Flagger creates the following Kubernetes ClusterIP service: <service.name>.<namespace>.svc.cluster.local selector app=<name>-primary <service.name>-primary.<namespace>.svc.cluster.local selector app=<name>-primary <service.name>-canary.<namespace>.svc.cluster.local selector app=<name> This ensures that traffic to podinfo.test:9898 will be routed to the latest stable release of your app. The podinfo-canary.test:9898 address is available only during the canary analysis and can be used for conformance testing or load testing. You can configure Flagger to set annotations and labels for the generated services with: ``` spec: service: port: 9898 apex: annotations: test: \"test\" labels: test: \"test\" canary: annotations: test: \"test\" labels: test: \"test\" primary: annotations: test: \"test\" labels: test: \"test\"``` Note that the apex annotations are added to both the generated Kubernetes Service and the generated service mesh/ingress object. This allows using external-dns with Istio VirtualServices and TraefikServices. Beware of configuration conflicts here. 
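As an example of the apex metadata use case mentioned above, the sketch below adds an external-dns hostname annotation; the annotation key is the standard external-dns one, and the domain is a placeholder.

```yaml
spec:
  service:
    port: 9898
    apex:
      annotations:
        # propagated to the generated Service and mesh/ingress object
        external-dns.alpha.kubernetes.io/hostname: app.example.com
```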
Besides port mapping and metadata, the service specification can contain URI match and rewrite rules, timeout and retry polices: ``` spec: service: port: 9898 match: uri: prefix: / rewrite: uri: / retries: attempts: 3 perTryTimeout: 1s timeout: 5s``` When using Istio as the mesh provider, you can also specify HTTP header operations, CORS and traffic policies, Istio gateways and hosts. The Istio routing configuration can be found" }, { "data": "You can use kubectl to get the current status of canary deployments cluster wide: ``` kubectl get canaries --all-namespaces NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME test podinfo Progressing 15 2019-06-30T14:05:07Z prod frontend Succeeded 0 2019-06-30T16:15:07Z prod backend Failed 0 2019-06-30T17:05:07Z``` The status condition reflects the last known state of the canary analysis: ``` kubectl -n test get canary/podinfo -oyaml | awk '/status/,0'``` A successful rollout status: ``` status: canaryWeight: 0 failedChecks: 0 iterations: 0 lastAppliedSpec: \"14788816656920327485\" lastPromotedSpec: \"14788816656920327485\" conditions: lastTransitionTime: \"2019-07-10T08:23:18Z\" lastUpdateTime: \"2019-07-10T08:23:18Z\" message: Canary analysis completed successfully, promotion finished. reason: Succeeded status: \"True\" type: Promoted``` The Promoted status condition can have one of the following reasons: Initialized, Waiting, Progressing, WaitingPromotion, Promoting, Finalising, Succeeded or Failed. A failed canary will have the promoted status set to false, the reason to failed and the last applied spec will be different to the last promoted one. Wait for a successful rollout: ``` kubectl wait canary/podinfo --for=condition=promoted``` CI example: ``` kubectl set image deployment/podinfo podinfod=stefanprodan/podinfo:3.0.1 ok=false until ${ok}; do kubectl get canary/podinfo | grep 'Progressing' && ok=true || ok=false sleep 5 done kubectl wait canary/podinfo --for=condition=promoted --timeout=5m kubectl get canary/podinfo | grep Succeeded``` The default behavior of Flagger on canary deletion is to leave resources that aren't owned by the controller in their current state. This simplifies the deletion action and avoids possible deadlocks during resource finalization. In the event the canary was introduced with existing resource(s) (i.e. service, virtual service, etc.), they would be mutated during the initialization phase and no longer reflect their initial state. If the desired functionality upon deletion is to revert the resources to their initial state, the revertOnDeletion attribute can be enabled. ``` spec: revertOnDeletion: true``` When a deletion action is submitted to the cluster, Flagger will attempt to revert the following resources: Canary target replicas will be updated to the primary replica count Canary service selector will be reverted Mesh/Ingress traffic routed to the target The recommended approach to disable canary analysis would be utilization of the skipAnalysis attribute, which limits the need for resource reconciliation. Utilizing the revertOnDeletion attribute should be enabled when you no longer plan to rely on Flagger for deployment management. Note When this feature is enabled expect a delay in the delete action due to the reconciliation. 
The canary analysis defines: the type of deployment strategy the metrics used to validate the canary version the webhooks used for conformance testing, load testing and manual gating the alerting settings Spec: ``` analysis: interval: threshold: maxWeight: stepWeight: stepWeightPromotion: iterations: primaryReadyThreshold: 100 canaryReadyThreshold: 100 match: # HTTP header metrics: # metric check alerts: # alert provider webhooks: # hook``` The canary analysis runs periodically until it reaches the maximum traffic weight or the number of iterations. On each run, Flagger calls the webhooks, checks the metrics and if the failed checks threshold is reached, stops the analysis and rolls back the canary. If alerting is configured, Flagger will post the analysis result using the alert providers. The suspend field can be set to true to suspend the Canary. If a Canary is suspended, its reconciliation is completely paused. This means that changes to target workloads, tracked ConfigMaps and Secrets don't trigger a Canary run and changes to resources generated by Flagger are not corrected. If the Canary was suspended during an active Canary run, then the run is paused without disturbing the workloads or the traffic weights. Last updated 8 months ago Was this helpful?" } ]
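A minimal sketch of the suspend field described above; the canary name is a placeholder and the rest of the spec is left unchanged.

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  suspend: true   # pauses reconciliation and any in-progress run; set back to false (or remove) to resume
  # ...the remaining targetRef/service/analysis fields stay as they were
```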
{ "category": "App Definition and Development", "file_name": "gloo-progressive-delivery.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide shows you how to use the Gloo Edge ingress controller and Flagger to automate canary releases and A/B testing. Flagger requires a Kubernetes cluster v1.16 or newer and Gloo Edge ingress 1.6.0 or newer. This guide was written for Flagger version 1.6.0 or higher. Prior versions of Flagger used Gloo UpstreamGroups to handle canaries, but newer versions of Flagger use Gloo RouteTables to handle canaries as well as A/B testing. Install Gloo with Helm v3: ``` helm repo add gloo https://storage.googleapis.com/solo-public-helm kubectl create ns gloo-system helm upgrade -i gloo gloo/gloo \\ --namespace gloo-system``` Install Flagger and the Prometheus add-on in the same namespace as Gloo: ``` helm repo add flagger https://flagger.app helm upgrade -i flagger flagger/flagger \\ --namespace gloo-system \\ --set prometheus.install=true \\ --set meshProvider=gloo``` Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services, Gloo route tables and upstreams). These objects expose the application outside the cluster and drive the canary analysis and promotion. Create a test namespace: ``` kubectl create ns test``` Create a deployment and a horizontal pod autoscaler: ``` kubectl -n test apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main``` Deploy the load testing service to generate traffic during the canary analysis: ``` kubectl -n test apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main``` Create a virtual service definition that references a route table that will be generated by Flagger (replace app.example.com with your own domain): ``` apiVersion: gateway.solo.io/v1 kind: VirtualService metadata: name: podinfo namespace: test spec: virtualHost: domains: 'app.example.com' routes: matchers: prefix: / delegateAction: ref: name: podinfo namespace: test``` Save the above resource as podinfo-virtualservice.yaml and then apply it: ``` kubectl apply -f ./podinfo-virtualservice.yaml``` Create a canary custom resource (replace app.example.com with your own domain): ``` apiVersion: flagger.app/v1beta1 kind: Canary metadata: name: podinfo namespace: test spec: provider: gloo targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo autoscalerRef: apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler name: podinfo service: port: 9898 targetPort: 9898 analysis: interval: 10s threshold: 5 maxWeight: 50 stepWeight: 5 metrics: name: request-success-rate thresholdRange: min: 99 interval: 1m name: request-duration thresholdRange: max: 500 interval: 30s webhooks: name: acceptance-test type: pre-rollout url: http://flagger-loadtester.test/ timeout: 10s metadata: type: bash cmd: \"curl -sd 'test' http://podinfo-canary:9898/token | grep token\" name: load-test url: http://flagger-loadtester.test/ timeout: 5s metadata: type: cmd cmd: \"hey -z 2m -q 5 -c 2 -host app.example.com http://gateway-proxy.gloo-system\"``` Note: when using upstreamRef the following fields are copied over from the original upstream: Labels, SslConfig, CircuitBreakers, ConnectionConfig, UseHttp2, InitialStreamWindowSize Save the above resource as podinfo-canary.yaml and then apply it: ``` kubectl apply -f ./podinfo-canary.yaml``` After a couple of seconds Flagger will create the canary objects: ``` deployment.apps/podinfo horizontalpodautoscaler.autoscaling/podinfo virtualservices.gateway.solo.io/podinfo canary.flagger.app/podinfo deployment.apps/podinfo-primary 
horizontalpodautoscaler.autoscaling/podinfo-primary service/podinfo service/podinfo-canary service/podinfo-primary routetables.gateway.solo.io/podinfo upstreams.gloo.solo.io/test-podinfo-canaryupstream-9898 upstreams.gloo.solo.io/test-podinfo-primaryupstream-9898``` When the bootstrap finishes Flagger will set the canary status to initialized: ``` kubectl -n test get canary podinfo NAME STATUS WEIGHT LASTTRANSITIONTIME podinfo Initialized 0 2019-05-17T08:09:51Z``` Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP requests success rate, requests average duration and pod health. Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack. Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\" }, { "data": "Flagger detects that the deployment revision changed and starts a new rollout: ``` kubectl -n test describe canary/podinfo Status: Canary Weight: 0 Failed Checks: 0 Phase: Succeeded Events: Type Reason Age From Message - - - Normal Synced 3m flagger New revision detected podinfo.test Normal Synced 3m flagger Scaling up podinfo.test Warning Synced 3m flagger Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available Normal Synced 3m flagger Advance podinfo.test canary weight 5 Normal Synced 3m flagger Advance podinfo.test canary weight 10 Normal Synced 3m flagger Advance podinfo.test canary weight 15 Normal Synced 2m flagger Advance podinfo.test canary weight 20 Normal Synced 2m flagger Advance podinfo.test canary weight 25 Normal Synced 1m flagger Advance podinfo.test canary weight 30 Normal Synced 1m flagger Advance podinfo.test canary weight 35 Normal Synced 55s flagger Advance podinfo.test canary weight 40 Normal Synced 45s flagger Advance podinfo.test canary weight 45 Normal Synced 35s flagger Advance podinfo.test canary weight 50 Normal Synced 25s flagger Copying podinfo.test template spec to podinfo-primary.test Warning Synced 15s flagger Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Normal Synced 5s flagger Promotion completed! Scaling down podinfo.test``` Note that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis. You can monitor all canaries with: ``` watch kubectl get canaries --all-namespaces NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME test podinfo Progressing 15 2019-05-17T14:05:07Z prod frontend Succeeded 0 2019-05-17T16:15:07Z prod backend Failed 0 2019-05-17T17:05:07Z``` During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses and rolls back the faulted version. Trigger another canary deployment: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.2``` Generate HTTP 500 errors: ``` watch curl -H 'Host: app.example.com' http://gateway-proxy.gloo-system/status/500``` Generate high latency: ``` watch curl -H 'Host: app.example.com' http://gateway-proxy.gloo-system/delay/2``` When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed. 
``` kubectl -n test describe canary/podinfo Status: Canary Weight: 0 Failed Checks: 10 Phase: Failed Events: Type Reason Age From Message - - - Normal Synced 3m flagger Starting canary deployment for podinfo.test Normal Synced 3m flagger Advance podinfo.test canary weight 5 Normal Synced 3m flagger Advance podinfo.test canary weight 10 Normal Synced 3m flagger Advance podinfo.test canary weight 15 Normal Synced 3m flagger Halt podinfo.test advancement success rate 69.17% < 99% Normal Synced 2m flagger Halt podinfo.test advancement success rate 61.39% < 99% Normal Synced 2m flagger Halt podinfo.test advancement success rate 55.06% < 99% Normal Synced 2m flagger Halt podinfo.test advancement success rate 47.00% < 99% Normal Synced 2m flagger (combined from similar events): Halt podinfo.test advancement success rate 38.08% < 99% Warning Synced 1m flagger Rolling back podinfo.test failed checks threshold reached 10 Warning Synced 1m flagger Canary failed! Scaling down podinfo.test``` The canary analysis can be extended with Prometheus queries. The demo app is instrumented with Prometheus so you can create a custom check that will use the HTTP request duration histogram to validate the canary. Create a metric template and apply it on the cluster: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: not-found-percentage namespace: test spec: provider: type: prometheus address:" }, { "data": "query: | 100 - sum( rate( httprequestdurationsecondscount{ kubernetes_namespace=\"{{ namespace }}\", kubernetespodname=~\"{{ target }}-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)\" status!=\"{{ interval }}\" }[1m] ) ) / sum( rate( httprequestdurationsecondscount{ kubernetes_namespace=\"{{ namespace }}\", kubernetespodname=~\"{{ target }}-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)\" }[{{ interval }}] ) ) * 100``` Edit the canary analysis and add the following metric: ``` analysis: metrics: name: \"404s percentage\" templateRef: name: not-found-percentage thresholdRange: max: 5 interval: 1m``` The above configuration validates the canary by checking if the HTTP 404 req/sec percentage is below 5 percent of the total traffic. If the 404s rate reaches the 5% threshold, then the canary fails. Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.3``` Generate 404s: ``` watch curl -H 'Host: app.example.com' http://gateway-proxy.gloo-system/status/404``` Watch Flagger logs: ``` kubectl -n gloo-system logs deployment/flagger -f | jq .msg Starting canary deployment for podinfo.test Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Halt podinfo.test advancement 404s percentage 6.20 > 5 Halt podinfo.test advancement 404s percentage 6.45 > 5 Halt podinfo.test advancement 404s percentage 7.60 > 5 Halt podinfo.test advancement 404s percentage 8.69 > 5 Halt podinfo.test advancement 404s percentage 9.70 > 5 Rolling back podinfo.test failed checks threshold reached 5 Canary failed! Scaling down podinfo.test``` If you have alerting configured, Flagger will send a notification with the reason why the canary failed. Besides weighted routing, Flagger can be configured to route traffic to the canary based on HTTP match conditions. In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users. This is particularly useful for frontend applications that require session affinity. 
Edit the canary analysis, remove the max/step weight and add the match conditions and iterations: ``` analysis: interval: 1m threshold: 5 iterations: 10 match: headers: x-canary: exact: \"insider\" webhooks: name: load-test url: http://flagger-loadtester.test/ metadata: cmd: \"hey -z 1m -q 5 -c 5 -H 'X-Canary: insider' -host app.example.com http://gateway-proxy.gloo-system\"``` The above configuration will run an analysis for ten minutes targeting users that have a X-Canary: insider header. Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.4``` Flagger detects that the deployment revision changed and starts the A/B test: ``` kubectl -n gloo-system logs deploy/flagger -f | jq .msg New revision detected! Progressing canary analysis for podinfo.test Advance podinfo.test canary iteration 1/10 Advance podinfo.test canary iteration 2/10 Advance podinfo.test canary iteration 3/10 Advance podinfo.test canary iteration 4/10 Advance podinfo.test canary iteration 5/10 Advance podinfo.test canary iteration 6/10 Advance podinfo.test canary iteration 7/10 Advance podinfo.test canary iteration 8/10 Advance podinfo.test canary iteration 9/10 Advance podinfo.test canary iteration 10/10 Copying podinfo.test template spec to podinfo-primary.test Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Routing all traffic to primary Promotion completed! Scaling down podinfo.test``` The web browser user agent header allows user segmentation based on device or OS. For example, if you want to route all mobile users to the canary instance: ``` match: headers: user-agent: regex: \".Mobile.\"``` Or if you want to target only Android users: ``` match: headers: user-agent: regex: \".Android.\"``` Or a specific browser version: ``` match: headers: user-agent: regex: \".Firefox.\"``` For an in-depth look at the analysis process read the usage docs. Last updated 8 months ago Was this helpful?" } ]
{ "category": "App Definition and Development", "file_name": "kuma-progressive-delivery.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This document describes how to release Flagger. To release a new Flagger version (e.g. 2.0.0) follow these steps: create a branch git checkout -b release-2.0.0 set the version in code and manifests TAG=2.0.0 make version-set commit changes and merge PR checkout main git checkout main && git pull tag main make release To release a new Flagger load tester version (e.g. 2.0.0) follow these steps: create a branch git checkout -b release-ld-2.0.0 set the version in code (cmd/loadtester/main.go#VERSION) set the version in the Helm chart (charts/loadtester/Chart.yaml and values.yaml) set the version in manifests (kustomize/tester/deployment.yaml) commit changes and push the branch upstream in GitHub UI, navigate to Actions and run the push-ld workflow selecting the release branch after the workflow finishes, open the PR which will run the e2e tests using the new tester version merge the PR if the tests pass After the tag has been pushed to GitHub, the CI release pipeline does the following: creates a GitHub release pushes the Flagger binary and change log to GitHub release pushes the Flagger container image to GitHub Container Registry pushed the Flagger install manifests to GitHub Container Registry signs all OCI artifacts and release assets with Cosign and GitHub OIDC pushes the Helm chart to github-pages branch GitHub pages publishes the new chart version on the Helm repository The documentation website is built from the docs branch. After a Flagger release, publish the docs with: git checkout main && git pull git checkout docs git rebase main git push origin docs Lastly open a PR with all the docs changes on fluxcd/website to update fluxcd.io/flagger. Last updated 1 year ago Was this helpful?" } ]
{ "category": "App Definition and Development", "file_name": "linkerd-progressive-delivery.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide walks you through setting up Flagger and AWS App Mesh on EKS. The App Mesh integration with EKS is made out of the following components: Kubernetes custom resources mesh.appmesh.k8s.aws defines a logical boundary for network traffic between the services virtualnode.appmesh.k8s.aws defines a logical pointer to a Kubernetes workload virtualservice.appmesh.k8s.aws defines the routing rules for a workload inside the mesh CRD controller - keeps the custom resources in sync with the App Mesh control plane Admission controller - injects the Envoy sidecar and assigns Kubernetes pods to App Mesh virtual nodes Telemetry service - Prometheus instance that collects and stores Envoy's metrics In order to create an EKS cluster you can use eksctl. Eksctl is an open source command-line utility made by Weaveworks in collaboration with Amazon. On MacOS you can install eksctl with Homebrew: ``` brew tap weaveworks/tap brew install weaveworks/tap/eksctl``` Create an EKS cluster with: ``` eksctl create cluster --name=appmesh \\ --region=us-west-2 \\ --nodes 3 \\ --node-volume-size=120 \\ --appmesh-access``` The above command will create a two nodes cluster with App Mesh IAM policy attached to the EKS node instance role. Verify the install with: ``` kubectl get nodes``` Install the Helm v3 command-line tool: ``` brew install helm``` Add the EKS repository to Helm: ``` helm repo add eks https://aws.github.io/eks-charts``` Install the Horizontal Pod Autoscaler (HPA) metrics provider: ``` helm upgrade -i metrics-server stable/metrics-server \\ --namespace kube-system \\ --set args[0]=--kubelet-preferred-address-types=InternalIP``` After a minute, the metrics API should report CPU and memory usage for pods. You can very the metrics API with: ``` kubectl -n kube-system top pods``` Install the App Mesh CRDs: ``` kubectl apply -k github.com/aws/eks-charts/stable/appmesh-controller//crds?ref=master``` Create the appmesh-system namespace: ``` kubectl create ns appmesh-system``` Install the App Mesh controller: ``` helm upgrade -i appmesh-controller eks/appmesh-controller \\ --wait --namespace appmesh-system``` In order to collect the App Mesh metrics that Flagger needs to run the canary analysis, you'll need to setup a Prometheus instance to scrape the Envoy sidecars. Install the App Mesh Prometheus: ``` helm upgrade -i appmesh-prometheus eks/appmesh-prometheus \\ --wait --namespace appmesh-system``` Add Flagger Helm repository: ``` helm repo add flagger https://flagger.app``` Install Flagger's Canary CRD: ``` kubectl apply -f https://raw.githubusercontent.com/fluxcd/flagger/main/artifacts/flagger/crd.yaml``` Deploy Flagger in the appmesh-system namespace: ``` helm upgrade -i flagger flagger/flagger \\ --namespace=appmesh-system \\ --set crd.create=false \\ --set meshProvider=appmesh:v1beta2 \\ --set metricsServer=http://appmesh-prometheus:9090``` Deploy App Mesh Grafana that comes with a dashboard for monitoring Flagger's canary releases: ``` helm upgrade -i appmesh-grafana eks/appmesh-grafana \\ --namespace appmesh-system``` You can access Grafana using port forwarding: ``` kubectl -n appmesh-system port-forward svc/appmesh-grafana 3000:3000``` Now that you have Flagger running, you can try the App Mesh canary deployments tutorial. Last updated 3 years ago Was this helpful?" } ]
{ "category": "App Definition and Development", "file_name": "monitoring.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Flagger comes with a Grafana dashboard made for canary analysis. Install Grafana with Helm: ``` helm upgrade -i flagger-grafana flagger/grafana \\ --set url=http://prometheus:9090``` The dashboard shows the RED and USE metrics for the primary and canary workloads: The canary errors and latency spikes have been recorded as Kubernetes events and logged by Flagger in json format: ``` kubectl -n istio-system logs deployment/flagger --tail=100 | jq .msg Starting canary deployment for podinfo.test Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Advance podinfo.test canary weight 20 Advance podinfo.test canary weight 25 Advance podinfo.test canary weight 30 Advance podinfo.test canary weight 35 Halt podinfo.test advancement success rate 98.69% < 99% Advance podinfo.test canary weight 40 Halt podinfo.test advancement request duration 1.515s > 500ms Advance podinfo.test canary weight 45 Advance podinfo.test canary weight 50 Copying podinfo.test template spec to podinfo-primary.test Halt podinfo-primary.test advancement waiting for rollout to finish: 1 old replicas are pending termination Scaling down podinfo.test Promotion completed! podinfo.test``` Flagger can be configured to send event payloads to a specified webhook: ``` helm upgrade -i flagger flagger/flagger \\ --set eventWebhook=https://example.com/flagger-canary-event-webhook``` The environment variable EVENTWEBHOOKURL can be used for activating the event-webhook, too. This is handy for using a secret to store a sensible value that could contain api keys for example. When configured, every action that Flagger takes during a canary deployment will be sent as JSON via an HTTP POST request. The JSON payload has the following schema: ``` { \"name\": \"string (canary name)\", \"namespace\": \"string (canary namespace)\", \"phase\": \"string (canary phase)\", \"metadata\": { \"eventMessage\": \"string (canary event message)\", \"eventType\": \"string (canary event type)\", \"timestamp\": \"string (unix timestamp ms)\" } }``` Example: ``` { \"name\": \"podinfo\", \"namespace\": \"default\", \"phase\": \"Progressing\", \"metadata\": { \"eventMessage\": \"New revision detected! Scaling up podinfo.default\", \"eventType\": \"Normal\", \"timestamp\": \"1578607635167\" } }``` The event webhook can be overwritten at canary level with: ``` analysis: webhooks: name: \"send to Slack\" type: event url: http://event-recevier.notifications/slack``` Flagger exposes Prometheus metrics that can be used to determine the canary analysis status and the destination weight values: ``` flaggerinfo{version=\"0.10.0\", meshprovider=\"istio\"} 1 flaggercanarytotal{namespace=\"test\"} 1 flaggercanarystatus{name=\"podinfo\" namespace=\"test\"} 1 flaggercanaryweight{workload=\"podinfo-primary\" namespace=\"test\"} 95 flaggercanaryweight{workload=\"podinfo\" namespace=\"test\"} 5 flaggercanarydurationsecondsbucket{name=\"podinfo\",namespace=\"test\",le=\"10\"} 6 flaggercanarydurationsecondsbucket{name=\"podinfo\",namespace=\"test\",le=\"+Inf\"} 6 flaggercanarydurationsecondssum{name=\"podinfo\",namespace=\"test\"} 17.3561329 flaggercanarydurationsecondscount{name=\"podinfo\",namespace=\"test\"} 6 flaggercanarymetric_analysis{metric=\"podinfo-http-successful-rate\",name=\"podinfo\",namespace=\"test\"} 1 flaggercanarymetric_analysis{metric=\"podinfo-custom-metric\",name=\"podinfo\",namespace=\"test\"} 0.918223108974359``` Last updated 2 years ago Was this helpful?" } ]
{ "category": "App Definition and Development", "file_name": "nginx-progressive-delivery.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Flagger can be configured to send alerts to various chat platforms. You can define a global alert provider at install time or configure alerts on a per canary basis. Flagger requires a custom webhook integration from slack, instead of the new slack app system. The webhook can be generated by following the legacy slack documentation Once the webhook has been generated. Flagger can be configured to send Slack notifications: ``` helm upgrade -i flagger flagger/flagger \\ --set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \\ --set slack.proxy=my-http-proxy.com \\ # optional http/s proxy --set slack.channel=general \\ --set slack.user=flagger \\ --set clusterName=my-cluster``` Once configured with a Slack incoming webhook, Flagger will post messages when a canary deployment has been initialised, when a new revision has been detected and if the canary analysis failed or succeeded. A canary deployment will be rolled back if the progress deadline exceeded or if the analysis reached the maximum number of failed checks: For using a Slack bot token, you should add token to a secret and use secretRef. Flagger can be configured to send notifications to Microsoft Teams: ``` helm upgrade -i flagger flagger/flagger \\ --set msteams.url=https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK \\ --set msteams.proxy-url=my-http-proxy.com # optional http/s proxy``` Similar to Slack, Flagger alerts on canary analysis events: Configuring alerting globally has several limitations as it's not possible to specify different channels or configure the verbosity on a per canary basis. To make the alerting move flexible, the canary analysis can be extended with a list of alerts that reference an alert provider. For each alert, users can configure the severity level. The alerts section overrides the global setting. Slack example: ``` apiVersion: flagger.app/v1beta1 kind: AlertProvider metadata: name: on-call namespace: flagger spec: type: slack channel: on-call-alerts username: flagger address: https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK proxy: http://my-http-proxy.com secretRef: name: on-call-url apiVersion: v1 kind: Secret metadata: name: on-call-url namespace: flagger data: address: <encoded-url> token: <encoded-token>``` The alert provider type can be: slack, msteams, rocket or discord. When set to discord, Flagger will use Slack formatting and will append /slack to the Discord address. When not specified, channel defaults to general and username defaults to flagger. When secretRef is specified, the Kubernetes secret must contain a data field named address, the address in the secret will take precedence over the address field in the provider spec. The canary analysis can have a list of alerts, each alert referencing an alert provider: ``` analysis: alerts: name: \"on-call Slack\" severity: error providerRef: name: on-call namespace: flagger name: \"qa Discord\" severity: warn providerRef: name: qa-discord name: \"dev MS Teams\" severity: info providerRef: name: dev-msteams``` Alert fields: name (required) severity levels: info, warn, error (default info) providerRef.name alert provider name (required) providerRef.namespace alert provider namespace (defaults to the canary namespace) When the severity is set to warn, Flagger will alert when waiting on manual confirmation or if the analysis fails. When the severity is set to error, Flagger will alert only if the canary analysis fails. 
To differentiate alerts based on the cluster name, you can configure Flagger with the -cluster-name=my-cluster command flag, or with Helm --set clusterName=my-cluster. You can use Alertmanager to trigger alerts when a canary deployment fails: ``` alert: canary_rollback expr: flagger_canary_status > 1 for: 1m labels: severity: warning annotations: summary: \"Canary failed\" description: \"Workload {{ $labels.name }} namespace {{ $labels.namespace }}\"```" } ]
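As a small worked example for the secretRef shown above, the referenced secret can also be created with kubectl instead of writing the manifest by hand. The webhook URL is a placeholder; the key must be named address, and a token key is only needed when authenticating with a Slack bot token.

```
# Create the secret referenced by the on-call AlertProvider
kubectl -n flagger create secret generic on-call-url \
  --from-literal=address=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
# add --from-literal=token=<bot-token> when using a Slack bot token
```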
{ "category": "App Definition and Development", "file_name": "prometheus-operator.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide shows you how to use Kuma and Flagger to automate canary deployments. Flagger requires a Kubernetes cluster v1.19 or newer and Kuma 1.7 or newer. Install Kuma and Prometheus (part of Kuma Metrics): ``` kumactl install control-plane | kubectl apply -f - kumactl install observability --components \"grafana,prometheus\" | kubectl apply -f -``` Install Flagger in the kuma-system namespace: ``` kubectl apply -k github.com/fluxcd/flagger//kustomize/kuma``` Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services and Kuma TrafficRoute). These objects expose the application inside the mesh and drive the canary analysis and promotion. Create a test namespace and enable Kuma sidecar injection: ``` kubectl create ns test kubectl annotate namespace test kuma.io/sidecar-injection=enabled``` Install the load testing service to generate traffic during the canary analysis: ``` kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main``` Create a deployment and a horizontal pod autoscaler: ``` kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main``` Create a canary custom resource for the podinfo deployment: ``` apiVersion: flagger.app/v1beta1 kind: Canary metadata: name: podinfo namespace: test annotations: kuma.io/mesh: default spec: targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo progressDeadlineSeconds: 60 service: port: 9898 targetPort: 9898 apex: annotations: 9898.service.kuma.io/protocol: \"http\" canary: annotations: 9898.service.kuma.io/protocol: \"http\" primary: annotations: 9898.service.kuma.io/protocol: \"http\" analysis: interval: 30s threshold: 5 maxWeight: 50 stepWeight: 5 metrics: name: request-success-rate threshold: 99 interval: 1m name: request-duration threshold: 500 interval: 30s webhooks: name: acceptance-test type: pre-rollout url: http://flagger-loadtester.test/ timeout: 30s metadata: type: bash cmd: \"curl -sd 'test' http://podinfo-canary.test:9898/token | grep token\" name: load-test type: rollout url: http://flagger-loadtester.test/ metadata: cmd: \"hey -z 2m -q 10 -c 2 http://podinfo-canary.test:9898/\"``` Save the above resource as podinfo-canary.yaml and then apply it: ``` kubectl apply -f ./podinfo-canary.yaml``` When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. The canary analysis will run for five minutes while validating the HTTP metrics and rollout hooks every half a minute. After a couple of seconds Flagger will create the canary objects: ``` deployment.apps/podinfo horizontalpodautoscaler.autoscaling/podinfo ingresses.extensions/podinfo canary.flagger.app/podinfo deployment.apps/podinfo-primary horizontalpodautoscaler.autoscaling/podinfo-primary service/podinfo service/podinfo-canary service/podinfo-primary trafficroutes.kuma.io/podinfo``` After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to podinfo.test will be routed to the primary pods. During the canary analysis, the podinfo-canary.test address can be used to target directly the canary pods. Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP requests success rate, requests average duration and pod health. Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack. 
Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\" }, { "data": "Flagger detects that the deployment revision changed and starts a new rollout: ``` kubectl -n test describe canary/podinfo Status: Canary Weight: 0 Failed Checks: 0 Phase: Succeeded Events: New revision detected! Scaling up podinfo.test Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available Pre-rollout check acceptance-test passed Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Advance podinfo.test canary weight 20 Advance podinfo.test canary weight 25 Waiting for podinfo.test rollout to finish: 1 of 2 updated replicas are available Advance podinfo.test canary weight 30 Advance podinfo.test canary weight 35 Advance podinfo.test canary weight 40 Advance podinfo.test canary weight 45 Advance podinfo.test canary weight 50 Copying podinfo.test template spec to podinfo-primary.test Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Promotion completed! Scaling down podinfo.test``` Note that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis. A canary deployment is triggered by changes in any of the following objects: Deployment PodSpec (container image, command, ports, env, resources, etc) ConfigMaps mounted as volumes or mapped to environment variables Secrets mounted as volumes or mapped to environment variables You can monitor all canaries with: ``` watch kubectl get canaries --all-namespaces NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME test podinfo Progressing 15 2019-06-30T14:05:07Z prod frontend Succeeded 0 2019-06-30T16:15:07Z prod backend Failed 0 2019-06-30T17:05:07Z``` During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses and rolls back the faulted version. Trigger another canary deployment: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.2``` Exec into the load tester pod with: ``` kubectl -n test exec -it flagger-loadtester-xx-xx sh``` Generate HTTP 500 errors: ``` watch -n 1 curl http://podinfo-canary.test:9898/status/500``` Generate latency: ``` watch -n 1 curl http://podinfo-canary.test:9898/delay/1``` When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed. ``` kubectl -n test describe canary/podinfo Status: Canary Weight: 0 Failed Checks: 10 Phase: Failed Events: Starting canary analysis for podinfo.test Pre-rollout check acceptance-test passed Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Halt podinfo.test advancement success rate 69.17% < 99% Halt podinfo.test advancement success rate 61.39% < 99% Halt podinfo.test advancement success rate 55.06% < 99% Halt podinfo.test advancement request duration 1.20s > 0.5s Halt podinfo.test advancement request duration 1.45s > 0.5s Rolling back podinfo.test failed checks threshold reached 5 Canary failed! Scaling down podinfo.test``` The above procedures can be extended with custom metrics checks, webhooks, manual promotion approval and Slack or MS Teams notifications. Last updated 1 year ago Was this helpful?" } ]
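If you want to script against the rollout outcome (for example in a CI job), the canary phase reported in the status above can be polled with kubectl. This is a sketch using the podinfo canary from this tutorial; the phase values (Progressing, Succeeded, Failed) are the ones shown in the describe output.

```
# Wait for the canary analysis to reach a terminal phase
while true; do
  phase=$(kubectl -n test get canary/podinfo -o jsonpath='{.status.phase}')
  echo "podinfo canary phase: ${phase}"
  case "${phase}" in
    Succeeded) exit 0 ;;
    Failed)    exit 1 ;;
  esac
  sleep 10
done
```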
{ "category": "App Definition and Development", "file_name": "release-guide.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide shows you how to use App Mesh and Flagger to automate canary deployments. You'll need an EKS cluster (Kubernetes >= 1.16) configured with App Mesh, you can find the installation guide here. Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services, App Mesh virtual nodes and services). These objects expose the application on the mesh and drive the canary analysis and promotion. The only App Mesh object you need to create by yourself is the mesh resource. Create a mesh called global: ``` cat << EOF | kubectl apply -f - apiVersion: appmesh.k8s.aws/v1beta2 kind: Mesh metadata: name: global spec: namespaceSelector: matchLabels: appmesh.k8s.aws/sidecarInjectorWebhook: enabled EOF``` Create a test namespace with App Mesh sidecar injection enabled: ``` cat << EOF | kubectl apply -f - apiVersion: v1 kind: Namespace metadata: name: test labels: appmesh.k8s.aws/sidecarInjectorWebhook: enabled EOF``` Create a deployment and a horizontal pod autoscaler: ``` kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main``` Deploy the load testing service to generate traffic during the canary analysis: ``` helm upgrade -i flagger-loadtester flagger/loadtester \\ --namespace=test \\ --set appmesh.enabled=true \\ --set \"appmesh.backends[0]=podinfo\" \\ --set \"appmesh.backends[1]=podinfo-canary\"``` Create a canary definition: ``` apiVersion: flagger.app/v1beta1 kind: Canary metadata: annotations: appmesh.flagger.app/accesslog: enabled name: podinfo namespace: test spec: provider: appmesh:v1beta2 targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo progressDeadlineSeconds: 60 autoscalerRef: apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler name: podinfo service: port: 9898 timeout: 15s retries: attempts: 3 perTryTimeout: 5s retryOn: \"gateway-error,client-error,stream-error\" match: uri: prefix: / rewrite: uri: / analysis: interval: 1m threshold: 5 maxWeight: 50 stepWeight: 5 metrics: name: request-success-rate thresholdRange: min: 99 interval: 1m name: request-duration thresholdRange: max: 500 interval: 30s webhooks: name: acceptance-test type: pre-rollout url: http://flagger-loadtester.test/ timeout: 30s metadata: type: bash cmd: \"curl -sd 'test' http://podinfo-canary.test:9898/token | grep token\" name: load-test url: http://flagger-loadtester.test/ timeout: 5s metadata: cmd: \"hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/\"``` Save the above resource as podinfo-canary.yaml and then apply it: ``` kubectl apply -f ./podinfo-canary.yaml``` After a couple of seconds Flagger will create the canary objects: ``` deployment.apps/podinfo horizontalpodautoscaler.autoscaling/podinfo canary.flagger.app/podinfo deployment.apps/podinfo-primary horizontalpodautoscaler.autoscaling/podinfo-primary service/podinfo service/podinfo-canary service/podinfo-primary virtualnode.appmesh.k8s.aws/podinfo-canary virtualnode.appmesh.k8s.aws/podinfo-primary virtualrouter.appmesh.k8s.aws/podinfo virtualrouter.appmesh.k8s.aws/podinfo-canary virtualservice.appmesh.k8s.aws/podinfo virtualservice.appmesh.k8s.aws/podinfo-canary``` After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to podinfo.test will be routed to the primary pods. During the canary analysis, the podinfo-canary.test address can be used to target directly the canary pods. App Mesh blocks all egress traffic by default. 
If your application needs to call another service, you have to create an App Mesh virtual service for it and add the virtual service name to the backend list. ``` service: port: 9898 backends: backend1 arn:aws:appmesh:eu-west-1:12345678910:mesh/my-mesh/virtualService/backend2``` In order to expose the podinfo app outside the mesh you can use the App Mesh" }, { "data": "Deploy the App Mesh Gateway behind an AWS NLB: ``` helm upgrade -i appmesh-gateway eks/appmesh-gateway \\ --namespace test``` Find the gateway public address: ``` export URL=\"http://$(kubectl -n test get svc/appmesh-gateway -ojson | jq -r \".status.loadBalancer.ingress[].hostname\")\" echo $URL``` Wait for the NLB to become active: ``` watch curl -sS $URL``` Create a gateway route that points to the podinfo virtual service: ``` cat << EOF | kubectl apply -f - apiVersion: appmesh.k8s.aws/v1beta2 kind: GatewayRoute metadata: name: podinfo namespace: test spec: httpRoute: match: prefix: \"/\" action: target: virtualService: virtualServiceRef: name: podinfo EOF``` Open your browser and navigate to the ingress address to access podinfo UI. A canary deployment is triggered by changes in any of the following objects: Deployment PodSpec (container image, command, ports, env, resources, etc) ConfigMaps and Secrets mounted as volumes or mapped to environment variables Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.1``` Flagger detects that the deployment revision changed and starts a new rollout: ``` kubectl -n test describe canary/podinfo Status: Canary Weight: 0 Failed Checks: 0 Phase: Succeeded Events: New revision detected! Scaling up podinfo.test Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available Pre-rollout check acceptance-test passed Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Advance podinfo.test canary weight 20 Advance podinfo.test canary weight 25 Advance podinfo.test canary weight 30 Advance podinfo.test canary weight 35 Advance podinfo.test canary weight 40 Advance podinfo.test canary weight 45 Advance podinfo.test canary weight 50 Copying podinfo.test template spec to podinfo-primary.test Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Routing all traffic to primary Promotion completed! Scaling down podinfo.test``` When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. Note that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis. During the analysis the canarys progress can be monitored with Grafana. The App Mesh dashboard URL is http://localhost:3000/d/flagger-appmesh/appmesh-canary?refresh=10s&orgId=1&var-namespace=test&var-primary=podinfo-primary&var-canary=podinfo. You can monitor all canaries with: ``` watch kubectl get canaries --all-namespaces NAMESPACE NAME STATUS WEIGHT test podinfo Progressing 15 prod frontend Succeeded 0 prod backend Failed 0``` If youve enabled the Slack notifications, you should receive the following messages: During the canary analysis you can generate HTTP 500 errors or high latency to test if Flagger pauses the rollout. 
Trigger a canary deployment: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.2``` Exec into the load tester pod with: ``` kubectl -n test exec -it deploy/flagger-loadtester bash``` Generate HTTP 500 errors: ``` hey -z 1m -c 5 -q 5 http://podinfo-canary.test:9898/status/500``` Generate latency: ``` watch -n 1 curl" }, { "data": "When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed. ``` kubectl -n appmesh-system logs deploy/flagger -f | jq .msg New revision detected! progressing canary analysis for podinfo.test Pre-rollout check acceptance-test passed Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Halt podinfo.test advancement success rate 69.17% < 99% Halt podinfo.test advancement success rate 61.39% < 99% Halt podinfo.test advancement success rate 55.06% < 99% Halt podinfo.test advancement request duration 1.20s > 0.5s Halt podinfo.test advancement request duration 1.45s > 0.5s Rolling back podinfo.test failed checks threshold reached 5 Canary failed! Scaling down podinfo.test``` If youve enabled the Slack notifications, youll receive a message if the progress deadline is exceeded, or if the analysis reached the maximum number of failed checks: Besides weighted routing, Flagger can be configured to route traffic to the canary based on HTTP match conditions. In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users. This is particularly useful for frontend applications that require session affinity. Edit the canary analysis, remove the max/step weight and add the match conditions and iterations: ``` analysis: interval: 1m threshold: 5 iterations: 10 match: headers: x-canary: exact: \"insider\" webhooks: name: load-test url: http://flagger-loadtester.test/ metadata: cmd: \"hey -z 1m -q 10 -c 2 -H 'X-Canary: insider' http://podinfo.test:9898/\"``` The above configuration will run an analysis for ten minutes targeting users that have a X-Canary: insider header. You can also use a HTTP cookie, to target all users with a canary cookie set to insider the match condition should be: ``` match: headers: cookie: regex: \"^(.?;)?(canary=insider)(;.)?$\" webhooks: name: load-test url: http://flagger-loadtester.test/ metadata: cmd: \"hey -z 1m -q 10 -c 2 -H 'Cookie: canary=insider' http://podinfo.test:9898/\"``` Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.3``` Flagger detects that the deployment revision changed and starts the A/B test: ``` kubectl -n appmesh-system logs deploy/flagger -f | jq .msg New revision detected! progressing canary analysis for podinfo.test Advance podinfo.test canary iteration 1/10 Advance podinfo.test canary iteration 2/10 Advance podinfo.test canary iteration 3/10 Advance podinfo.test canary iteration 4/10 Advance podinfo.test canary iteration 5/10 Advance podinfo.test canary iteration 6/10 Advance podinfo.test canary iteration 7/10 Advance podinfo.test canary iteration 8/10 Advance podinfo.test canary iteration 9/10 Advance podinfo.test canary iteration 10/10 Copying podinfo.test template spec to podinfo-primary.test Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Routing all traffic to primary Promotion completed! 
Scaling down podinfo.test``` The above procedure can be extended with custom metrics checks, webhooks, manual promotion approval and Slack or MS Teams notifications." } ]
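To see the A/B match condition above in action, you can send requests through the App Mesh gateway with and without the header; during the analysis only the requests carrying the header should be served by the canary. The snippet assumes $URL still holds the gateway address exported earlier in this guide and that podinfo's default JSON response includes its version.

```
# Request routed to the canary (matches the X-Canary: insider condition)
curl -sS -H 'X-Canary: insider' "${URL}/" | grep version

# Request routed to the primary (no header match)
curl -sS "${URL}/" | grep version
```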
{ "category": "App Definition and Development", "file_name": "upgrade-guide.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide shows you how to automate Blue/Green deployments with Flagger and Kubernetes. For applications that are not deployed on a service mesh, Flagger can orchestrate Blue/Green style deployments with Kubernetes L4 networking. When using a service mesh blue/green can be used as specified here. Flagger requires a Kubernetes cluster v1.16 or newer. Install Flagger and the Prometheus add-on: ``` helm repo add flagger https://flagger.app helm upgrade -i flagger flagger/flagger \\ --namespace flagger \\ --set prometheus.install=true \\ --set meshProvider=kubernetes``` If you already have a Prometheus instance running in your cluster, you can point Flagger to the ClusterIP service with: ``` helm upgrade -i flagger flagger/flagger \\ --namespace flagger \\ --set metricsServer=http://prometheus.monitoring:9090``` Optionally you can enable Slack notifications: ``` helm upgrade -i flagger flagger/flagger \\ --reuse-values \\ --namespace flagger \\ --set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \\ --set slack.channel=general \\ --set slack.user=flagger``` Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployment and ClusterIP services). These objects expose the application inside the cluster and drive the canary analysis and Blue/Green promotion. Create a test namespace: ``` kubectl create ns test``` Create a deployment and a horizontal pod autoscaler: ``` kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main``` Deploy the load testing service to generate traffic during the analysis: ``` helm upgrade -i flagger-loadtester flagger/loadtester \\ --namespace=test``` Create a canary custom resource: ``` apiVersion: flagger.app/v1beta1 kind: Canary metadata: name: podinfo namespace: test spec: provider: kubernetes targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo progressDeadlineSeconds: 60 autoscalerRef: apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler name: podinfo service: port: 9898 portDiscovery: true analysis: interval: 30s threshold: 2 iterations: 10 metrics: name: request-success-rate thresholdRange: min: 99 interval: 1m name: request-duration thresholdRange: max: 500 interval: 30s webhooks: name: smoke-test type: pre-rollout url: http://flagger-loadtester.test/ timeout: 15s metadata: type: bash cmd: \"curl -sd 'anon' http://podinfo-canary.test:9898/token | grep token\" name: load-test url: http://flagger-loadtester.test/ timeout: 5s metadata: type: cmd cmd: \"hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/\"``` The above configuration will run an analysis for five minutes. 
Save the above resource as podinfo-canary.yaml and then apply it: ``` kubectl apply -f ./podinfo-canary.yaml``` After a couple of seconds Flagger will create the canary objects: ``` deployment.apps/podinfo horizontalpodautoscaler.autoscaling/podinfo canary.flagger.app/podinfo deployment.apps/podinfo-primary horizontalpodautoscaler.autoscaling/podinfo-primary service/podinfo service/podinfo-canary service/podinfo-primary``` Blue/Green scenario: on bootstrap, Flagger will create three ClusterIP services (app-primary,app-canary, app) and a shadow deployment named app-primary that represents the blue version when a new version is detected, Flagger would scale up the green version and run the conformance tests (the tests should target the app-canary ClusterIP service to reach the green version) if the conformance tests are passing, Flagger would start the load tests and validate them with custom Prometheus queries if the load test analysis is successful, Flagger will promote the new version to app-primary and scale down the green version Trigger a deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.1``` Flagger detects that the deployment revision changed and starts a new rollout: ``` kubectl -n test describe canary/podinfo Events: New revision detected podinfo.test Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available Pre-rollout check acceptance-test passed Advance podinfo.test canary iteration 1/10 Advance podinfo.test canary iteration 2/10 Advance podinfo.test canary iteration 3/10 Advance podinfo.test canary iteration 4/10 Advance podinfo.test canary iteration 5/10 Advance podinfo.test canary iteration 6/10 Advance podinfo.test canary iteration 7/10 Advance podinfo.test canary iteration 8/10 Advance podinfo.test canary iteration 9/10 Advance podinfo.test canary iteration 10/10 Copying podinfo.test template spec to podinfo-primary.test Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Promotion completed! Scaling down" }, { "data": "Note that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis. You can monitor all canaries with: ``` watch kubectl get canaries --all-namespaces NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME test podinfo Progressing 100 2019-06-16T14:05:07Z prod frontend Succeeded 0 2019-06-15T16:15:07Z prod backend Failed 0 2019-06-14T17:05:07Z``` During the analysis you can generate HTTP 500 errors and high latency to test Flagger's rollback. Exec into the load tester pod with: ``` kubectl -n test exec -it flagger-loadtester-xx-xx sh``` Generate HTTP 500 errors: ``` watch curl http://podinfo-canary.test:9898/status/500``` Generate latency: ``` watch curl http://podinfo-canary.test:9898/delay/1``` When the number of failed checks reaches the analysis threshold, the green version is scaled to zero and the rollout is marked as failed. 
``` kubectl -n test describe canary/podinfo Status: Failed Checks: 2 Phase: Failed Events: Type Reason Age From Message - - - Normal Synced 3m flagger New revision detected podinfo.test Normal Synced 3m flagger Advance podinfo.test canary iteration 1/10 Normal Synced 3m flagger Advance podinfo.test canary iteration 2/10 Normal Synced 3m flagger Advance podinfo.test canary iteration 3/10 Normal Synced 3m flagger Halt podinfo.test advancement success rate 69.17% < 99% Normal Synced 2m flagger Halt podinfo.test advancement success rate 61.39% < 99% Warning Synced 2m flagger Rolling back podinfo.test failed checks threshold reached 2 Warning Synced 1m flagger Canary failed! Scaling down podinfo.test``` The analysis can be extended with Prometheus queries. The demo app is instrumented with Prometheus so you can create a custom check that will use the HTTP request duration histogram to validate the canary (green version). Create a metric template and apply it on the cluster: ``` apiVersion: flagger.app/v1beta1 kind: MetricTemplate metadata: name: not-found-percentage namespace: test spec: provider: type: prometheus address: http://flagger-prometheus.flagger:9090 query: | 100 - sum( rate( httprequestdurationsecondscount{ kubernetes_namespace=\"{{ namespace }}\", kubernetespodname=~\"{{ target }}-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)\" status!=\"{{ interval }}\" }[1m] ) ) / sum( rate( httprequestdurationsecondscount{ kubernetes_namespace=\"{{ namespace }}\", kubernetespodname=~\"{{ target }}-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)\" }[{{ interval }}] ) ) * 100``` Edit the canary analysis and add the following metric: ``` analysis: metrics: name: \"404s percentage\" templateRef: name: not-found-percentage thresholdRange: max: 5 interval: 1m``` The above configuration validates the canary (green version) by checking if the HTTP 404 req/sec percentage is below 5 percent of the total traffic. If the 404s rate reaches the 5% threshold, then the rollout is rolled back. Trigger a deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.3``` Generate 404s: ``` watch curl http://podinfo-canary.test:9898/status/400``` Watch Flagger logs: ``` kubectl -n flagger logs deployment/flagger -f | jq .msg New revision detected podinfo.test Scaling up podinfo.test Advance podinfo.test canary iteration 1/10 Halt podinfo.test advancement 404s percentage 6.20 > 5 Halt podinfo.test advancement 404s percentage 6.45 > 5 Rolling back podinfo.test failed checks threshold reached 2 Canary failed! Scaling down podinfo.test``` If you have alerting configured, Flagger will send a notification with the reason why the canary failed. Flagger comes with a testing service that can run Helm tests when configured as a pre-rollout webhook. Deploy the Helm test runner in the kube-system namespace using the tiller service account: ``` helm repo add flagger https://flagger.app helm upgrade -i flagger-helmtester flagger/loadtester \\ --namespace=kube-system \\ --set serviceAccountName=tiller``` When deployed the Helm tester API will be available at http://flagger-helmtester.kube-system/. Add a helm test pre-rollout hook to your chart: ``` analysis: webhooks: name: \"conformance testing\" type: pre-rollout url: http://flagger-helmtester.kube-system/ timeout: 3m metadata: type: \"helm\" cmd: \"test {{ .Release.Name }} --cleanup\"``` When the canary analysis starts, Flagger will call the pre-rollout webhooks. 
If the helm test fails, Flagger will retry until the analysis threshold is reached and the canary is rolled back. For an in-depth look at the analysis" } ]
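Conformance tests in the Blue/Green scenario should target the app-canary ClusterIP service so that they hit the green version, as noted above. The sketch below runs a one-off curl pod against the podinfo canary service; the /healthz path is assumed from podinfo's defaults, so swap in your own smoke-test endpoint.

```
# List the services Flagger created for the Blue/Green setup
kubectl -n test get svc podinfo podinfo-primary podinfo-canary

# Run a one-off smoke test against the green version
kubectl -n test run curl-check --rm -i --restart=Never \
  --image=curlimages/curl --command -- curl -s http://podinfo-canary.test:9898/healthz
```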
{ "category": "App Definition and Development", "file_name": "webhooks.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide shows you how to use App Mesh and Flagger to automate canary deployments. You'll need an EKS cluster (Kubernetes >= 1.16) configured with App Mesh, you can find the installation guide here. Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services, App Mesh virtual nodes and services). These objects expose the application on the mesh and drive the canary analysis and promotion. The only App Mesh object you need to create by yourself is the mesh resource. Create a mesh called global: ``` cat << EOF | kubectl apply -f - apiVersion: appmesh.k8s.aws/v1beta2 kind: Mesh metadata: name: global spec: namespaceSelector: matchLabels: appmesh.k8s.aws/sidecarInjectorWebhook: enabled EOF``` Create a test namespace with App Mesh sidecar injection enabled: ``` cat << EOF | kubectl apply -f - apiVersion: v1 kind: Namespace metadata: name: test labels: appmesh.k8s.aws/sidecarInjectorWebhook: enabled EOF``` Create a deployment and a horizontal pod autoscaler: ``` kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main``` Deploy the load testing service to generate traffic during the canary analysis: ``` helm upgrade -i flagger-loadtester flagger/loadtester \\ --namespace=test \\ --set appmesh.enabled=true \\ --set \"appmesh.backends[0]=podinfo\" \\ --set \"appmesh.backends[1]=podinfo-canary\"``` Create a canary definition: ``` apiVersion: flagger.app/v1beta1 kind: Canary metadata: annotations: appmesh.flagger.app/accesslog: enabled name: podinfo namespace: test spec: provider: appmesh:v1beta2 targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo progressDeadlineSeconds: 60 autoscalerRef: apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler name: podinfo service: port: 9898 timeout: 15s retries: attempts: 3 perTryTimeout: 5s retryOn: \"gateway-error,client-error,stream-error\" match: uri: prefix: / rewrite: uri: / analysis: interval: 1m threshold: 5 maxWeight: 50 stepWeight: 5 metrics: name: request-success-rate thresholdRange: min: 99 interval: 1m name: request-duration thresholdRange: max: 500 interval: 30s webhooks: name: acceptance-test type: pre-rollout url: http://flagger-loadtester.test/ timeout: 30s metadata: type: bash cmd: \"curl -sd 'test' http://podinfo-canary.test:9898/token | grep token\" name: load-test url: http://flagger-loadtester.test/ timeout: 5s metadata: cmd: \"hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/\"``` Save the above resource as podinfo-canary.yaml and then apply it: ``` kubectl apply -f ./podinfo-canary.yaml``` After a couple of seconds Flagger will create the canary objects: ``` deployment.apps/podinfo horizontalpodautoscaler.autoscaling/podinfo canary.flagger.app/podinfo deployment.apps/podinfo-primary horizontalpodautoscaler.autoscaling/podinfo-primary service/podinfo service/podinfo-canary service/podinfo-primary virtualnode.appmesh.k8s.aws/podinfo-canary virtualnode.appmesh.k8s.aws/podinfo-primary virtualrouter.appmesh.k8s.aws/podinfo virtualrouter.appmesh.k8s.aws/podinfo-canary virtualservice.appmesh.k8s.aws/podinfo virtualservice.appmesh.k8s.aws/podinfo-canary``` After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to podinfo.test will be routed to the primary pods. During the canary analysis, the podinfo-canary.test address can be used to target directly the canary pods. App Mesh blocks all egress traffic by default. 
If your application needs to call another service, you have to create an App Mesh virtual service for it and add the virtual service name to the backend list. ``` service: port: 9898 backends: backend1 arn:aws:appmesh:eu-west-1:12345678910:mesh/my-mesh/virtualService/backend2``` In order to expose the podinfo app outside the mesh you can use the App Mesh" }, { "data": "Deploy the App Mesh Gateway behind an AWS NLB: ``` helm upgrade -i appmesh-gateway eks/appmesh-gateway \\ --namespace test``` Find the gateway public address: ``` export URL=\"http://$(kubectl -n test get svc/appmesh-gateway -ojson | jq -r \".status.loadBalancer.ingress[].hostname\")\" echo $URL``` Wait for the NLB to become active: ``` watch curl -sS $URL``` Create a gateway route that points to the podinfo virtual service: ``` cat << EOF | kubectl apply -f - apiVersion: appmesh.k8s.aws/v1beta2 kind: GatewayRoute metadata: name: podinfo namespace: test spec: httpRoute: match: prefix: \"/\" action: target: virtualService: virtualServiceRef: name: podinfo EOF``` Open your browser and navigate to the ingress address to access podinfo UI. A canary deployment is triggered by changes in any of the following objects: Deployment PodSpec (container image, command, ports, env, resources, etc) ConfigMaps and Secrets mounted as volumes or mapped to environment variables Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.1``` Flagger detects that the deployment revision changed and starts a new rollout: ``` kubectl -n test describe canary/podinfo Status: Canary Weight: 0 Failed Checks: 0 Phase: Succeeded Events: New revision detected! Scaling up podinfo.test Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available Pre-rollout check acceptance-test passed Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Advance podinfo.test canary weight 20 Advance podinfo.test canary weight 25 Advance podinfo.test canary weight 30 Advance podinfo.test canary weight 35 Advance podinfo.test canary weight 40 Advance podinfo.test canary weight 45 Advance podinfo.test canary weight 50 Copying podinfo.test template spec to podinfo-primary.test Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Routing all traffic to primary Promotion completed! Scaling down podinfo.test``` When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. Note that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis. During the analysis the canarys progress can be monitored with Grafana. The App Mesh dashboard URL is http://localhost:3000/d/flagger-appmesh/appmesh-canary?refresh=10s&orgId=1&var-namespace=test&var-primary=podinfo-primary&var-canary=podinfo. You can monitor all canaries with: ``` watch kubectl get canaries --all-namespaces NAMESPACE NAME STATUS WEIGHT test podinfo Progressing 15 prod frontend Succeeded 0 prod backend Failed 0``` If youve enabled the Slack notifications, you should receive the following messages: During the canary analysis you can generate HTTP 500 errors or high latency to test if Flagger pauses the rollout. 
Trigger a canary deployment: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.2``` Exec into the load tester pod with: ``` kubectl -n test exec -it deploy/flagger-loadtester bash``` Generate HTTP 500 errors: ``` hey -z 1m -c 5 -q 5 http://podinfo-canary.test:9898/status/500``` Generate latency: ``` watch -n 1 curl" }, { "data": "When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed. ``` kubectl -n appmesh-system logs deploy/flagger -f | jq .msg New revision detected! progressing canary analysis for podinfo.test Pre-rollout check acceptance-test passed Advance podinfo.test canary weight 5 Advance podinfo.test canary weight 10 Advance podinfo.test canary weight 15 Halt podinfo.test advancement success rate 69.17% < 99% Halt podinfo.test advancement success rate 61.39% < 99% Halt podinfo.test advancement success rate 55.06% < 99% Halt podinfo.test advancement request duration 1.20s > 0.5s Halt podinfo.test advancement request duration 1.45s > 0.5s Rolling back podinfo.test failed checks threshold reached 5 Canary failed! Scaling down podinfo.test``` If youve enabled the Slack notifications, youll receive a message if the progress deadline is exceeded, or if the analysis reached the maximum number of failed checks: Besides weighted routing, Flagger can be configured to route traffic to the canary based on HTTP match conditions. In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users. This is particularly useful for frontend applications that require session affinity. Edit the canary analysis, remove the max/step weight and add the match conditions and iterations: ``` analysis: interval: 1m threshold: 5 iterations: 10 match: headers: x-canary: exact: \"insider\" webhooks: name: load-test url: http://flagger-loadtester.test/ metadata: cmd: \"hey -z 1m -q 10 -c 2 -H 'X-Canary: insider' http://podinfo.test:9898/\"``` The above configuration will run an analysis for ten minutes targeting users that have a X-Canary: insider header. You can also use a HTTP cookie, to target all users with a canary cookie set to insider the match condition should be: ``` match: headers: cookie: regex: \"^(.?;)?(canary=insider)(;.)?$\" webhooks: name: load-test url: http://flagger-loadtester.test/ metadata: cmd: \"hey -z 1m -q 10 -c 2 -H 'Cookie: canary=insider' http://podinfo.test:9898/\"``` Trigger a canary deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.3``` Flagger detects that the deployment revision changed and starts the A/B test: ``` kubectl -n appmesh-system logs deploy/flagger -f | jq .msg New revision detected! progressing canary analysis for podinfo.test Advance podinfo.test canary iteration 1/10 Advance podinfo.test canary iteration 2/10 Advance podinfo.test canary iteration 3/10 Advance podinfo.test canary iteration 4/10 Advance podinfo.test canary iteration 5/10 Advance podinfo.test canary iteration 6/10 Advance podinfo.test canary iteration 7/10 Advance podinfo.test canary iteration 8/10 Advance podinfo.test canary iteration 9/10 Advance podinfo.test canary iteration 10/10 Copying podinfo.test template spec to podinfo-primary.test Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Routing all traffic to primary Promotion completed! 
Scaling down podinfo.test``` The above procedure can be extended with custom metrics checks, webhooks, manual promotion approval and Slack or MS Teams notifications. Last updated 8 months ago Was this helpful?" } ]
{ "category": "App Definition and Development", "file_name": "zero-downtime-deployments.md", "project_name": "Flagger", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide shows you how to use Flagger with KEDA ScaledObjects to autoscale workloads during a Canary analysis run. We will be using a Blue/Green deployment strategy with the Kubernetes provider for the sake of this tutorial, but you can use any deployment strategy combined with any supported provider. Flagger requires a Kubernetes cluster v1.16 or newer. For this tutorial, we'll need KEDA 2.7.1 or newer. Install KEDA: ``` helm repo add kedacore https://kedacore.github.io/charts kubectl create namespace keda helm install keda kedacore/keda --namespace keda``` Install Flagger: ``` helm repo add flagger https://flagger.app helm upgrade -i flagger flagger/flagger \\ --namespace flagger \\ --set prometheus.install=true \\ --set meshProvider=kubernetes``` Flagger takes a Kubernetes deployment and a KEDA ScaledObject targeting the deployment. It then creates a series of objects (Kubernetes deployments, ClusterIP services and another KEDA ScaledObject targeting the created Deployment). These objects expose the application inside the mesh and drive the Canary analysis and Blue/Green promotion. Create a test namespace: ``` kubectl create ns test``` Create a deployment named podinfo: ``` kubectl apply -n test -f https://raw.githubusercontent.com/fluxcd/flagger/main/kustomize/podinfo/deployment.yaml``` Deploy the load testing service to generate traffic during the analysis: ``` kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main``` Create a ScaledObject which targets the podinfo deployment and uses Prometheus as a trigger: ``` apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: podinfo-so namespace: test spec: scaleTargetRef: name: podinfo pollingInterval: 10 cooldownPeriod: 20 minReplicaCount: 1 maxReplicaCount: 3 triggers: type: prometheus metadata: name: prom-trigger serverAddress: http://flagger-prometheus.flagger-system:9090 metricName: httprequeststotal query: sum(rate(httprequeststotal{ app=\"podinfo\" }[30s])) threshold: '5'``` Create a canary custom resource for the podinfo deployment: ``` apiVersion: flagger.app/v1beta1 kind: Canary metadata: name: podinfo namespace: test spec: provider: kubernetes targetRef: apiVersion: apps/v1 kind: Deployment name: podinfo autoscalerRef: apiVersion: keda.sh/v1alpha1 kind: ScaledObject name: podinfo-so primaryScalerQueries: prom-trigger: sum(rate(httprequeststotal{ app=\"podinfo-primary\" }[30s])) primaryScalerReplicas: minReplicas: 2 maxReplicas: 5 progressDeadlineSeconds: 60 service: port: 80 targetPort: 9898 name: podinfo-svc portDiscovery: true analysis: interval: 15s threshold: 5 iterations: 5 metrics: name: request-success-rate interval: 1m thresholdRange: min: 99 name: request-duration interval: 30s thresholdRange: max: 500 webhooks: name: load-test url: http://flagger-loadtester.test/ timeout: 5s metadata: type: cmd cmd: \"hey -z 2m -q 20 -c 2 http://podinfo-svc-canary.test/\"``` Save the above resource as podinfo-canary.yaml and then apply it: ``` kubectl apply -f ./podinfo-canary.yaml``` After a couple of seconds Flagger will create the canary objects: ``` deployment.apps/podinfo scaledobject.keda.sh/podinfo-so canary.flagger.app/podinfo deployment.apps/podinfo-primary horizontalpodautoscaler.autoscaling/podinfo-primary service/podinfo service/podinfo-canary service/podinfo-primary scaledobject.keda.sh/podinfo-so-primary``` We refer to our ScaledObject for the canary deployment using .spec.autoscalerRef. Flagger will use this to generate a ScaledObject which will scale the primary deployment. 
By default, Flagger will try to guess the query to use for the primary ScaledObject, by replacing all mentions of .spec.targetRef.Name and {.spec.targetRef.Name}-canary with {.spec.targetRef.Name}-primary, for all" }, { "data": "For eg, if your ScaledObject has a trigger query defined as: sum(rate(httprequeststotal{ app=\"podinfo\" }[30s])) or sum(rate(httprequeststotal{ app=\"podinfo-primary\" }[30s])), then the primary ScaledObject will have the same trigger with a query defined as sum(rate(httprequeststotal{ app=\"podinfo-primary\" }[30s])). If, the generated query does not meet your requirements, you can specify the query for autoscaling the primary deployment explicitly using .spec.autoscalerRef.primaryScalerQueries, which lets you define a query for each trigger. Please note that, your ScaledObject's .spec.triggers[@].name must not be blank, as Flagger needs that to identify each trigger uniquely. In the situation when it is desired to have different scaling replica configuration between the canary and primary deployment ScaledObject you can use the .spec.autoscalerRef.primaryScalerReplicas to override these values for the generated primary ScaledObject. After the boostrap, the podinfo deployment will be scaled to zero and the traffic to podinfo.test will be routed to the primary pods. To keep the podinfo deployment at 0 replicas and pause auto scaling, Flagger will add an annotation to your ScaledObject: autoscaling.keda.sh/paused-replicas: 0. During the canary analysis, the annotation is removed, to enable auto scaling for the podinfo deployment. The podinfo-canary.test address can be used to target directly the canary pods. When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. The Blue/Green deployment will run for five iterations while validating the HTTP metrics and rollout hooks every 15 seconds. Trigger a deployment by updating the container image: ``` kubectl -n test set image deployment/podinfo \\ podinfod=ghcr.io/stefanprodan/podinfo:6.0.1``` Flagger detects that the deployment revision changed and starts a new rollout: ``` kubectl -n test describe canary/podinfo Events: New revision detected podinfo.test Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available Pre-rollout check acceptance-test passed Advance podinfo.test canary iteration 1/10 Advance podinfo.test canary iteration 2/10 Advance podinfo.test canary iteration 3/10 Advance podinfo.test canary iteration 4/10 Advance podinfo.test canary iteration 5/10 Advance podinfo.test canary iteration 6/10 Advance podinfo.test canary iteration 7/10 Advance podinfo.test canary iteration 8/10 Advance podinfo.test canary iteration 9/10 Advance podinfo.test canary iteration 10/10 Copying podinfo.test template spec to podinfo-primary.test Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available Promotion completed! Scaling down podinfo.test``` Note that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis. 
You can monitor all canaries with: ``` watch kubectl get canaries --all-namespaces NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME test podinfo Progressing 100 2019-06-16T14:05:07Z``` You can monitor the scaling of the deployments with: ``` watch kubectl -n test get deploy podinfo NAME READY UP-TO-DATE AVAILABLE AGE flagger-loadtester 1/1 1 1 4m21s podinfo 3/3 3 3 4m28s podinfo-primary 3/3 3 3 3m14s``` You can monitor how Flagger edits the annotations of your ScaledObject with: ``` watch \"kubectl get -n test scaledobjects podinfo-so -o=jsonpath='{.metadata.annotations}'\"```" } ]
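To watch the ScaledObject react during an analysis, you can push the request rate past the 5 req/s trigger threshold defined above and observe the replica counts. The sketch reuses the load tester deployed earlier and the podinfo-svc-canary service name from the canary spec.

```
# Generate enough traffic to cross the Prometheus trigger threshold
kubectl -n test exec deploy/flagger-loadtester -- \
  hey -z 2m -q 20 -c 2 http://podinfo-svc-canary.test/

# In another terminal, watch KEDA scale the workloads
kubectl -n test get scaledobjects,hpa,deploy -w
```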
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Flux", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Flux is a set of continuous and progressive delivery solutions for Kubernetes that are open and extensible. The latest version of Flux brings many new features, making it more flexible and versatile. Flux is a CNCF Graduated project. Next event: 2024-05-14 22:00 UTC:The Flux Bug Scrub (Australia/AEST edition)Where: #flux on cncf slack 2024-05-14 22:00:00+00:00 Flux and Flagger deploy apps with canaries, feature flags, and A/B rollouts. Flux can also manage any Kubernetes resource. Infrastructure and workload dependency management is built in. Describe the entire desired state of your system in Git. This includes apps, configuration, dashboards, monitoring, and everything else. Use YAML to enforce conformance to the declared system. You dont need to run kubectl because all changes are synced automatically. Everything is controlled through pull requests. Your Git history provides a sequence of transactions, allowing you to recover state from any snapshot. Flux enables application deployment (CD) and (with the help of Flagger) progressive delivery (PD) through automatic reconciliation. Flux can even push back to Git for you with automated container image updates to Git (image scanning and patching). Flux works with your Git providers (GitHub, GitLab, Bitbucket, can even use s3-compatible buckets as a source), all major container registries, fully integrates with OCI and all CI workflow providers. Pull vs. Push, least amount of privileges, adherence to Kubernetes security policies and tight integration with security tools and best-practices. Read more about our security considerations. Flux uses true Kubernetes RBAC via impersonation and supports multiple Git repositories. Multi-cluster infrastructure and apps work out of the box with Cluster API: Flux can use one Kubernetes cluster to manage apps in either the same or other clusters, spin up additional clusters themselves, and manage clusters including lifecycle and fleets. Support for" }, { "data": "Kustomize, Helm; GitHub, GitLab, Harbor and custom webhooks; notifications on Slack and other chat systems; RBAC, and policy-driven validation (OPA, Kyverno, admission controllers). No matter if you use one of the Flux UIs or a hosted cloud offering from your cloud vendor, Flux has a thriving ecosystem of integrations and products built on top of it and all have great dashboards for you. Times shown are UTC. See this page for more events, more details, and subscription options. The Flux project aspires to be the vendor-neutral home for GitOps in a Cloud Native world. What we achieved up until today is only possible because of our community, that is very easy to work with. Join the conversation in GitHub Discussions. Everything Flux related ranging from specifications and feature planning to Show & Tell happens here. If you want to talk to the Flux team and community in real-time, join us on Slack. This is a great way to get to know everyone. Get a Slack invite, or go to the #flux channel. Join our (low-traffic) mailing list to stay up to day on announcements and sporadic discussions. Flux is a CNCF Graduated project and was categorised as Adopt on the CNCF CI/CD Tech Radar (alongside Helm). Some of the biggest organisations have adopted the Flux family of projects for their GitOps needs. See who is part of our community and how about joining yourself? If you are new to Flux, you might want to check out some of the following resources to get started. Find more on our dedicated resources page. We welcome contributors of any kind. 
The components of Flux are built on Kubernetes controller-runtime, so anyone can contribute and its functionality can be extended very easily. Flux is a Cloud Native Computing Foundation Graduated project. The Linux Foundation (TLF) has registered trademarks and uses trademarks. For a list of TLF trademarks, see Trademark Usage." } ]
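The GitOps loop described in this entry becomes clearer with a concrete manifest. The sketch below is a minimal illustration and is not taken from the Flux docs above: it assumes a hypothetical repository URL and a ./deploy path, and pairs a GitRepository source with a Kustomization that reconciles it.
```
# Source: where Flux pulls the desired state from (URL is a placeholder).
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/podinfo
  ref:
    branch: main
---
# Reconciliation: apply the manifests under ./deploy and prune anything removed from Git.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: podinfo
  path: ./deploy
  prune: true
```
Because both objects live in Git as well, day-to-day changes never require kubectl, which is the pull-based workflow the entry describes.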
{ "category": "App Definition and Development", "file_name": "kubectl.docs.kubernetes.io.md", "project_name": "Flux", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Guides and API References for Kubectl and Kustomize. Kubectl is a Kubernetes CLI, which provides a Swiss Army knife of functionality for working with Kubernetes clusters. It can be used to deploy and manage applications on Kubernetes, and for scripting and building higher-level frameworks. Kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is. Join the community on Slack Join us We do a Pull Request contributions workflow on GitHub. New users are always welcome! Contribute to Kubectl / Kustomize For announcement of latest features, etc. Follow us" } ]
{ "category": "App Definition and Development", "file_name": "about-billing-for-github-actions.md", "project_name": "GitHub Actions", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": "github-privacy-statement.md", "project_name": "GitHub Actions", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Automate, customize, and execute your software development workflows right in your repository with GitHub Actions. You can discover, create, and share actions to perform any job you'd like, including CI/CD, and combine actions in a completely customized workflow. Whether you are new to GitHub Actions or interested in learning all they have to offer, this guide will help you use GitHub Actions to accelerate your application development workflows. Example workflows that demonstrate the CI/CD features of GitHub Actions. You can create custom continuous integration (CI) workflows directly in your GitHub repository with GitHub Actions. Learn how to control deployments with features like environments and concurrency. A workflow is a configurable automated process made up of one or more jobs. You must create a YAML file to define your workflow configuration. Whether you are new to GitHub Actions or interested in learning all they have to offer, this guide will help you use GitHub Actions to accelerate your application development workflows. Example workflows that demonstrate the CI/CD features of GitHub Actions. You can configure your workflows to run when specific activity on GitHub happens, at a scheduled time, or when an event outside of GitHub occurs. GitHub provides starter workflows for a variety of languages and tooling. You can publish Node.js packages to a registry as part of your continuous integration (CI) workflow. You can create a continuous integration (CI) workflow to build and test your PowerShell project. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": "go.md", "project_name": "Gitness", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Let's discover Gitness in less than 30 seconds. Use the following Docker command to install Gitness. ``` docker run -d \\ -p 3000:3000 \\ -v /var/run/docker.sock:/var/run/docker.sock \\ -v /tmp/gitness:/data \\ --name gitness \\ --restart always \\ harness/gitness``` By default, Gitness stores data beneath the /data directory within the running container. Learn how to manage your Gitness data depending on your use case. Optionally, Gitness can import projects from external sources (such as GitLab groups or GitHub organizations). Optionally, Gitness can import repositories from external sources (such as GitLab or GitHub). Now that you've created a project and repository, you can:" } ]
{ "category": "App Definition and Development", "file_name": "rust.md", "project_name": "Gitness", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This guide covers configuring continuous integration pipelines for Rust projects. In the below example we demonstrate a pipeline that executes cargo build and cargo test commands. These commands are executed inside the rust Docker container, downloaded at runtime from DockerHub. ``` kind: pipelinespec: stages: - type: ci spec: steps: - name: test type: run spec: container: rust:1.30 script: |- cargo build --verbose --all cargo test --verbose --all``` Please note that you can use any Docker image in your pipeline from any Docker registry. You can use the official rust images, or your can bring your own. You can configure multiple, containerized steps to test against multiple versions of Rust. ``` kind: pipelinespec: stages: - type: ci spec: steps: - name: test 1.30 type: run spec: container: rust:1.30 script: |- cargo build --verbose --all cargo test --verbose --all - name: test 1.29 type: run spec: container: rust:1.29 script: |- cargo build --verbose --all cargo test --verbose --all```" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Google Cloud Build", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Get started using Google Cloud by trying one of our product quickstarts, tutorials, or interactive walkthroughs. Get started with Google Cloud quickstarts Whether you're looking to deploy a web app, set up a database, or run big data workloads, it can be challenging to get started. Luckily, Google Cloud quickstarts offer step-by-step tutorials that cover basic use cases, operating the Google Cloud console, and how to use the Google command-line tools." } ]
{ "category": "App Definition and Development", "file_name": "create-custom-build-steps.md", "project_name": "Google Cloud Build", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This page provides instructions on how you can secure image deployments to Cloud Run and Google Kubernetes Engine using Cloud Build. Learn how to configure Binary Authorization to check for build attestations and block deployments of images that are not generated by Cloud Build. This process can reduce the risk of deploying unauthorized software. Enable the Cloud Build, Binary Authorization, and Artifact Registry APIs. Enable the APIs To use the command-line examples in this guide, install and configure the Google Cloud SDK. Set up Binary Authorization for your platform. A policy in Binary Authorization is a set of rules that govern the deployment of images. You can configure a rule to require digitally signed attestations. Cloud Build generates and signs attestations at build time. With Binary Authorization, you can use the built-by-cloud-build attestor to verify the attestations and only deploy images built by Cloud Build. To create the built-by-cloud-build attestor in your project, run a build in that project. To allow only images built by Cloud Build to be deployed, perform the following steps: Go to the Binary Authorization page in the Google Cloud console: Go to Binary Authorization In the Policy tab, click Edit Policy. In the Edit Policy dialog, select Allow only images that have been approved by all of the following attestors. Click Add Attestors. In the Add attestors dialog box, do the following: Alternatively, select Add by attestor resource ID. In Attestor resource ID, enter ``` projects/PROJECT_ID/attestors/built-by-cloud-build ``` Replacing PROJECT_ID with the project where you run Cloud Build. Click Add 1 attestor. Click Save Policy. Export your existing policy to a file using the following command: ``` gcloud container binauthz policy export > /tmp/policy.yaml ``` Edit your policy file. Edit one of the following rules: Add a requireAttestationsBy block to the rule if there isn't one there already. In the requireAttestationsBy block, add ``` projects/PROJECT_ID/attestors/built-by-cloud-build ``` Replacing PROJECT_ID with the project where you run Cloud Build. Save the policy file. Import the policy file. ``` gcloud container binauthz policy import /tmp/policy.yaml ``` The following is an example policy file that contains the reference to the built-by-cloud-build-attestor: ``` defaultAdmissionRule: evaluationMode: REQUIRE_ATTESTATION enforcementMode: ENFORCEDBLOCKANDAUDITLOG requireAttestationsBy: projects/PROJECT_ID/attestors/built-by-cloud-build name: projects/PROJECT_ID/policy ``` Replace PROJECT_ID with the project ID where you run Cloud Build. You can view policy errors in the Binary Authorization log messages for GKE or Cloud Run In dry-run mode, Binary Authorization checks policy compliance without actually blocking the deployment. Instead, policy compliance status messages are logged to Cloud Logging. You can use these logs to determine if your blocking policy is working correctly and to identify false positives. To enable dry run, do the following: Go to the Binary Authorization page in the Google Cloud" }, { "data": "Go to Binary Authorization. Click Edit Policy. In Default Rule or a specific rule, select Dry-run mode. Click Save Policy. Export the Binary Authorization policy to a YAML file: ``` gcloud container binauthz policy export > /tmp/policy.yaml ``` In a text editor, set enforcementMode to DRYRUNAUDITLOG_ONLY and save the file. 
To update the policy, import the file by executing the following command: ``` gcloud container binauthz policy import /tmp/policy.yaml ``` You can view policy errors in the Binary Authorization log messages for GKE or Cloud Run Cloud Build and Binary Authorization must be in the same project. If you run your deployment platform in another project, configure IAM roles for a multi-project setup, and refer to the Cloud Build project when adding the built-by-cloud-build attestor in Binary Authorization. Cloud Build does not generate attestations when you push images to Artifact Registry using an explicit docker push build step. Make sure you push to Artifact Registry using the images field in your docker build build step. For more information on images, see Different ways of storing images in Artifact Registry. You must use separate build config files for your build pipeline and deployment pipeline. This is because Cloud Build produces attestations only after the build pipeline completes successfully. Binary Authorization will then check the attestation before deploying the image. By default, Cloud Build does not generate Binary Authorization attestations for builds in private pools. To generate attestations, add the requestedVerifyOption: VERIFIED option to your build configuration file: ``` steps: name: 'gcr.io/cloud-builders/docker' args: [ 'build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1', '.' ] images: 'us-central1-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1' options: requestedVerifyOption: VERIFIED ``` After adding the requestedVerifyOption, Cloud Build enables attestation generation and provenance metadata for your image. An attestor is created the first time you run a build in a project. The attestor ID is of the form projects/PROJECT_ID/attestors/built-by-cloud-build, where PROJECT_ID is your project ID. You can check the build attestor metadata using the following command: ``` curl -X GET -H \"Content-Type: application/json\" \\ -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\ https://binaryauthorization.googleapis.com/v1beta1/projects/PROJECT_ID/attestors/built-by-cloud-build ``` Replace PROJECT_ID with the project where you run Cloud Build. The output contains information about the attestor and the corresponding public keys. For example: ``` name\": \"projects/PROJECT_ID/attestors/built-by-cloud-build\", \"userOwnedDrydockNote\": { \"noteReference\": \"projects/PROJECT_ID/notes/built-by-cloud-build\", \"publicKeys\": [ { \"id\": \"//cloudkms.googleapis.com/v1/projects/verified-builder/locations/asia/keyRings/attestor/cryptoKeys/builtByGCB/cryptoKeyVersions/1\", \"pkixPublicKey\": { \"publicKeyPem\": \"--BEGIN PUBLIC KEY--\\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEMMvFxZLgIiWOLIXsaTkjTmOKcaK7\\neIZrgpWHpHziTFGg8qyEI4S8O2/2wh1Eru7+sj0Sh1QxytN/KE5j3mTvYA==\\n--END PUBLIC KEY--\\n\", \"signatureAlgorithm\": \"ECDSAP256SHA256\" } }, ... } ], \"delegationServiceAccountEmail\": \"service-942118413832@gcp-binaryauthorization.iam.gserviceaccount.com\" }, \"updateTime\": \"2021-09-24T15:26:44.808914Z\", \"description\": \"Attestor autogenerated by build ID fab07092-30f4-4f70-caf7-4545cbc404d6\" ``` Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. 
Last updated 2024-06-07 UTC." } ]
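Since attestations are generated only when images are pushed through the images field rather than an explicit docker push step, the build half of this setup typically looks like the sketch below; the Artifact Registry repository and tag are placeholders.
```
# cloudbuild.yaml: the images field is what triggers attestation generation.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA', '.']
images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA'
```
The deployment pipeline then runs separately, as the page notes, so Binary Authorization can check the attestation before the image is rolled out.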
{ "category": "App Definition and Development", "file_name": "speeding-up-builds.md", "project_name": "Google Cloud Build", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This page provides best practices for speeding up Cloud Build builds. When you containerize an application, files that are not needed at runtime, such as build-time dependencies and intermediate files, can be inadvertently included in the container image. These unneeded files can increase the size of the container image and add extra time and cost as the image moves between your container image registry and your container runtime. To help reduce the size of your container image, separate the building of the application, along with the tools used to build it, from the assembly of the runtime container. For more information, see Building leaner containers. Kaniko cache is a Cloud Build feature that caches container build artifacts by storing and indexing intermediate layers within a container image registry, such as Google's own Container Registry, where they are available for use by subsequent builds. For more information, see Using Kaniko cache. The easiest way to increase the speed of your Docker image build is by specifying a cached image that can be used for subsequent builds. You can specify the cached image by adding the --cache-from argument in your build config file, which will instruct Docker to build using that image as a cache source. Each Docker image is made up of stacked layers. Using --cache-from rebuilds all the layers from the changed layer until the end of the build; therefore using --cache-from is not beneficial if you change a layer in the earlier stages of your Docker build. It is recommended that you always use --cache-from for your builds, but keep the following caveats in mind: The following steps explain how to build using a previously cached image: In your build config, add instructions to: Add a --cache-from argument to use that image for rebuilds. ``` steps: name: 'gcr.io/cloud-builders/docker' entrypoint: 'bash' args: ['-c', 'docker pull gcr.io/$PROJECTID/[IMAGENAME]:latest || exit 0'] name: 'gcr.io/cloud-builders/docker' args: [ 'build', '-t', 'gcr.io/$PROJECTID/[IMAGENAME]:latest', '--cache-from', 'gcr.io/$PROJECTID/[IMAGENAME]:latest', '.' ] images: ['gcr.io/$PROJECTID/[IMAGENAME]:latest'] ``` where [IMAGE_NAME] is the name of your image. Build your image using the above build config: ``` gcloud builds submit --config cloudbuild.yaml . ``` In your build config, add instructions to: Add a --cache-from argument to use that image for rebuilds. ``` { \"steps\": [ { \"name\": \"gcr.io/cloud-builders/docker\", \"entrypoint\": \"bash\", \"args\": [\"-c\", \"docker pull gcr.io/$PROJECTID/[IMAGENAME]:latest || exit 0\"] }, { \"name\": \"gcr.io/cloud-builders/docker\", \"args\": [ \"build\", \"-t\", \"gcr.io/$PROJECTID/[IMAGENAME]:latest\", \"--cache-from\", \"gcr.io/$PROJECTID/[IMAGENAME]:latest\"," }, { "data": "] } ], \"images\": [\"gcr.io/$PROJECTID/[IMAGENAME]:latest\"] } ``` where [IMAGE_NAME] is the name of your image. Build your image using the above build config: ``` gcloud builds submit --config cloudbuild.json . ``` To increase the speed of a build, reuse the results from a previous build. You can copy the results of a previous build to a Google Cloud Storage bucket, use the results for faster calculation, and then copy the new results back to the bucket. Use this method when your build takes a long time and produces a small number of files that does not take time to copy to and from Google Cloud Storage. Unlike --cache-from, which is only for Docker builds, Google Cloud Storage caching can be used for any builder supported by Cloud Build. 
Use the following steps to cache directories using Google Cloud Storage: In your build config file, add instructions to: Copy the new results back into the bucket. ``` steps: name: gcr.io/cloud-builders/gsutil args: ['cp', 'gs://mybucket/results.zip', 'previous_results.zip'] name: gcr.io/cloud-builders/gsutil args: ['cp', 'new_results.zip', 'gs://mybucket/results.zip'] ``` Build your code using the above build config: ``` gcloud builds submit --config cloudbuild.yaml . ``` In your build config file, add instructions to: Copy the new results back into the bucket. ``` { \"steps\": [ { \"name\": \"gcr.io/cloud-builders/gsutil\", \"args\": [\"cp\", \"gs://mybucket/results.zip\", \"previous_results.zip\"] }, { // operations that use previousresults.zip and produce newresults.zip }, { \"name\": \"gcr.io/cloud-builders/gsutil\", \"args\": [\"cp\", \"new_results.zip\", \"gs://mybucket/results.zip\"] } ] } ``` Build your code using the above build config: ``` gcloud builds submit --config cloudbuild.json . ``` When a build is triggered, your code directory is uploaded for use by Cloud Build. You can exclude files not needed by your build with a .gcloudignore file to optimize the upload time. Examples of commonly excluded files include: To prepare a .gcloudignore file to address these cases, create a file in your project root with contents such as: ``` .git dist node_modules vendor *.jar ``` Excluding compiled code and third-party dependencies also results in a more consistent build process and a reduced risk of accidental deployment of code that is still under active development. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
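The Kaniko cache feature mentioned near the top of this entry is enabled from the build step itself; in this sketch the destination repository and cache TTL are illustrative values.
```
# cloudbuild.yaml: Kaniko builds the image and reuses cached layers stored in the registry.
steps:
  - name: 'gcr.io/kaniko-project/executor:latest'
    args:
      - --destination=us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA
      - --cache=true      # store and reuse intermediate layers
      - --cache-ttl=24h   # hypothetical retention window for cached layers
```
Unlike --cache-from, this approach needs no explicit pull step, because Kaniko looks up cached layers in the registry on its own.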
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "JenkinsX", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Jenkins X provides automated CI+CD for Kubernetes with Preview Environments on Pull Requests using Tekton as the underlying pipeline engine. Provisioning your Kubernetes cluster is easy. Jenkins X does not really care how you provision your cluster, however there are many resources that are provisioned, so we recommend using the Terraform modules we've made available. Prerequisite for using them is an installed terraform binary. You can find the installation instructions here. Create a main.tf file in an empty directory and add the code snippet below. You will need to provide the GCP project id in which to install the cluster. ``` module \"jx\" { source = \"jenkins-x/jx/google\" gcp_project = \"<my-gcp-project-id>\" } output \"jx_requirements\" { value = module.jx.jx_requirements }``` This setup uses defaults parameters. For more details instructions on how to provision a Kubernetes cluster with the GKE Terraform module refer to the GKE terraform module. Provision the cluster using Terraform. ``` terraform init terraform apply``` Create a main.tf file in an empty directory and add the code snippet below. ``` module \"eks-jx\" { source = \"jenkins-x/eks-jx/aws\" } output \"jx_requirements\" { value = module.eks-jx.jx_requirements } output \"vaultuserid\" { value = module.eks-jx.vaultuserid description = \"The Vault IAM user id\" } output \"vaultusersecret\" { value = module.eks-jx.vaultusersecret description = \"The Vault IAM user secret\" }``` This setup uses defaults parameters. For more details instructions on how to provision a Kubernetes cluster with the EKS Terraform module refer to the EKS terraform module. Provision the cluster using Terraform. ``` terraform init terraform apply export VAULTAWSACCESSKEYID=$(terraform output vaultuserid) export VAULTAWSSECRETACCESSKEY=$(terraform output vaultusersecret)``` The next step is installing the Jenkins X binary jx for your operating system. ``` brew install jenkins-x/jx/jx``` ``` curl -L \"https://github.com/jenkins-x/jx/releases/download/$(curl --silent https://api.github.com/repos/jenkins-x/jx/releases/latest | jq -r '.tag_name')/jx-darwin-amd64.tar.gz\" | tar xzv \"jx\" sudo mv jx /usr/local/bin jx version --short``` ``` curl -LO https://github.com/jenkins-x/jx/releases/download/latest/jx-linux-amd64.tar.gz -LO https://github.com/jenkins-x/jx/releases/download/latest/jx-linux-amd64.tar.gz.sig cosign verify-blob --key https://raw.githubusercontent.com/jenkins-x/jx/main/jx.pub --signature jx-linux-amd64.tar.gz.sig jx-linux-amd64.tar.gz tar -zxvf jx-linux-amd64.tar.gz jx version --short``` ``` @\"%SystemRoot%\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoProfile ` -InputFormat None -ExecutionPolicy Bypass ` -Command \"iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))\" ` && SET \"PATH=%PATH%;%ALLUSERSPROFILE%\\chocolatey\\bin\"``` ``` choco install jenkins-x``` You have provisioned your Kubernetes cluster and installed the jx CLI. Now you are ready to install Jenkins X into the cluster. To do that, you will use the jx boot command. For more details around JX Boot refer to the Run Boot section. ``` terraform output jxrequirements > <someempty_dir>/jx-requirements.yml cd <someemptydir> jx boot --requirements jx-requirements.yml``` Copyright 2024 The Jenkins X Authors. All Rights Reserved Privacy Policy Copyright 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. 
For a list of trademarks of The Linux Foundation, please see our Trademark Usage page. Linux is a registered trademark of Linus Torvalds. Linux Foundation Privacy Policy and Terms of Use" } ]
{ "category": "App Definition and Development", "file_name": "docs_src=hash&ref_src=twsrc%5Etfw.md", "project_name": "k6", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "All Products Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE All Open Source Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data All Community resources Dashboard templates Try out and share prebuilt visualizations Prometheus exporters Get your metrics into Prometheus quickly All end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana All Learn Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with" }, { "data": "Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups All Docs Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes All Company 
Help build the future of open source observability software Open positions Check out the open source projects we support Downloads Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE Free forever plan (Surprise: its actually useful) No credit card needed, ever. Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data Community resources end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with plugins. Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups Featured Getting started with the Grafana LGTM Stack Well demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics. 
Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes Grot cannot remember your choice unless you click the consent notice at the bottom. I am Grot. Ask me anything Grafana Cloud k6 is a performance-testing application in your Grafana Cloud instance powered by k6" }, { "data": "k6 is an open source load testing tool designed for developers to allow teams to create tests-as-code, integrate performance tests as part of the software development lifecycle, and help users test, analyze, and fix performance issues in their applications. Grafana Cloud k6 lets you leverage all of k6 capabilities, while Grafana handles the infrastructure work of scaling servers, handling distributed load zones, and storing and aggregating metrics from your tests. This documentation describes everything you need to know to author, run, analyze, and manage load tests in Grafana Cloud k6. For a high-level picture of why you might want to use Grafana Cloud k6, continue reading this page. For a quick tutorial, go to Get started to run your first test. Dashboards and performance tests share a common purpose: increased system reliability. However, these two tools focus on different aspects of reliability. Performance tests try to create conditions to uncover issues, and dashboards monitor systems so that, when issues do occur, theyre visible. Grafana Cloud k6 joins two complementary tools in the same platform. The k6 app in Grafana Cloud also has complementary features to enhance k6 OSS. A holistic approach to reliability brings several benefits: Compare client and system metrics in a single pane of glass. With k6, you can write load tests to catch performance issues before they enter production. With Grafana, you can visualize and compare test results to other system metrics. Together, the tools help you see both sides of what happens when your system is under load. Bring teams and data together. Since the k6 app comes with Grafana Cloud, your testing team can share a platform with everyone who uses your Grafana stack. That opens new opportunities for collaboration, communication, and discovery. For example, engineering teams might correlate results with panels initially used only by DevOps teams. On the other end, since tests can be run directly from Grafana Cloud, you could write a test that other developers could re-use to test for regressions after each significant commit. Diagnose known failure conditions You can also use the k6 app with Grafana to diagnose why systems fail under certain conditions. Create a script that reproduces the behavior that you expect to cause failure, then run it with increasing load as you monitor your metrics. Tests in Grafana cloud k6 run on k6 OSS. k6 is an open source performance testing tool written in Go. Its designed to be full-featured, generate load efficiently, and, above all, provide excellent developer experience. A few of k6 major features are: In almost all cases, you can run the same tests locally or in Grafana Cloud. Rather than lock you in, Grafana Cloud k6 enhances and complements your local performance tests. 
When you run tests on Grafana Cloud k6 servers, Grafana handles the following infrastructure work for you: For a glimpse of the development work that managing this infrastructure requires, read Peeking under the hood of k6 Cloud. Besides visualization and infrastructure, Grafana Cloud k6 enhances the command-line application with graphical interfaces to build, analyze, and manage tests. That includes: There are a few different paths you can choose to create your first performance test:" } ]
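Because k6 tests are plain scripts, they drop into the same CI workflows covered elsewhere in this section. The sketch below runs a hypothetical tests/smoke.js with the official grafana/k6 Docker image from a GitHub Actions job; the workflow name and script path are assumptions.
```
# .github/workflows/k6.yml: run a k6 smoke test on every push.
name: k6-smoke-test
on: [push]
jobs:
  k6:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run k6
        run: docker run --rm -i grafana/k6 run - < tests/smoke.js   # script is piped via stdin
```
Thresholds defined in the script cause k6 to exit non-zero and fail the job when performance degrades, which is the tests-as-code workflow described above.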
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Keploy", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Last updated: April 28th, 2024 Keploy Inc. operates https://keploy.io and https://docs.keploy.io (the \"Sites\"). Please read this Privacy Policy carefully to understand our policies and practices in relation to the collection, processing, and storage of your personal information. By accessing or using our Services (defined below), you understand that we will process your personal information as described in this Privacy Policy. If you do not agree with or are not comfortable with any aspect of this Privacy Policy, you should immediately discontinue access or use of our Services. We use your Personal Information only for providing and improving the Sites. By using the Sites, you agree to the collection and use of information in accordance with this policy. As you navigate through and interact with our Website, we may use automatic data collection technologies to collect certain information about your equipment, browsing actions, and patterns, including: (a) details of your visits to our Website, including traffic data, location data, logs, and other communication data and the resources that you access and use our Services; and (b) information about your computer and internet connection, including your IP address, operating system, and browser type. The information we collect automatically helps us to improve our Services and to deliver a better and more personalized service, including by enabling us to: (i) estimate our audience size and usage patterns; (ii) store information about your preferences, allowing us to customize our Services according to your individual interests; (iii) speed up your searches; and (iv) recognize you when you return to our Services. For more information about the cookies and tracking technologies we may use please review the Cookie section below. (\"Personal Information\"). Like many site operators, we collect information that your browser sends whenever you visit our Sites (\"Log Data\"). This Log Data may include information such as your computer's Internet Protocol (\"IP\") address, browser type, browser version, the pages of our Sites that you visit, the time and date of your visit, the time spent on those pages and other statistics. In addition, we may use third party services such as Google Analytics that collect, monitor and analyze this" }, { "data": "Depending on the method of collection, we may use your Personal Information to contact you with newsletters, marketing or promotional materials and other information that we believe is relevant for you. Cookies are files with a small amount of data, which may include an anonymous unique identifier. Cookies are sent to your browser from a web site and stored on your computer's hard drive. Like many sites, we use \"cookies\" to collect information. You can instruct your browser to refuse all cookies or to indicate when a cookie is being sent. However, if you do not accept cookies, you may not be able to use some portions of our Sites. The security of your Personal Information is important to us, but remember that no method of transmission over the Internet, or method of electronic storage, is 100% secure. While we strive to use commercially acceptable means to protect your Personal Information, we cannot guarantee its absolute security. 
We have implemented appropriate technical and organizational measures designed to secure your information, or any data you, your employees, or agents provide to us pursuant to your role as an employee of another business (Business Data), from accidental loss and from unauthorized access, use, alteration, and disclosure. For more information on our security practices please refer to: keploy.io/security. The safety and security of your information depends on you. Where we have given you (or where you have chosen) a password for access to certain parts of our Services you are responsible for keeping this password confidential. We ask you not to share your password with anyone. This Privacy Policy is effective as of April 28th 2024 and will remain in effect except with respect to any changes in its provisions in the future, which will be in effect immediately after being posted on this page. We reserve the right to update or change our Privacy Policy at any time and you should check this Privacy Policy periodically. Your continued use of the Service after we post any modifications to the Privacy Policy on this page will constitute your acknowledgment of the modifications and your consent to abide and be bound by the modified Privacy Policy. If we make any material changes to this Privacy Policy, we will notify you either through the email address you have provided us, or by placing a prominent notice on our website. If you have any questions about this Privacy Policy, please contact us- hello@keploy.io." } ]
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Keploy", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This documentation is your roadmap to becoming a Keploy expert, whether you're a seasoned developer or just starting out. Keploy is your open-source, developer-centric backend testing tool. It makes backend testing easy and productive for engineering teams. Plus, it's easy-to-use, powerful and extensible.. Keploy creates test cases and data mocks/stubs from user-traffic by recording API calls and DB queries, significantly speeding up releases and enhancing reliability. Let's get Keploy up and running on your Windows, Linux, or macOS machine, so you can start crafting test cases in minutes. Windows Linux MacOS Please note that Keploy v2 is currently in development, with the best experience on Linux. Docker support is experimental and may have some limitations for certain use cases. Go Java Python Javascript Rust CSharp MongoDB HTTP PostgresSQL Redis MySQL DynamoDB Are you curious, or do you have questions burning in your mind? Look no further! Join our lively Community Forum where you can: Watch tutorials and meetups with Keploy users. Join our monthly meetup and ask questions! Give Keploy a star on GitHub (it helps!) Follow @keployio for Keploy news and events. Join for live conversations and get support. Explore blogs on API, Testing, Mocks and Keploy. We are happy to help you with your talks, blogposts (whether on our blog or yours) or anything else you want to try. Just get in touch! To request a Keploy Cloud account, please complete the request form. Our team will review your request and get back to you as soon as possible.Apply here!" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Liquibase", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "After you install Liquibase, get started with our tutorial and learn how Liquibase works. With Liquibase, SQL statements [1] are declared as Liquibasechangesets [2] in a changelog file [3]. Liquibase[4] then uses the changelog to apply the changesets to target databases [5]. In this tutorial, you will use an example changelog to apply two rounds of updates to an H2 database that is included with the Liquibase installation files. From a command line terminal, enter liquibase init start-h2 to start the example H2 database. The database console opens automatically in a browser on port 9090. The terminal includes the following output: ``` ... Opening Database Console in Browser... Dev Web URL: http://192.168.56.1:8090/frame.jsp?jsessionid=d219f3d2012e078770943ef4c2cd0d11 Integration Web URL: http://192.168.56.1:8090/frame.jsp?jsessionid=d7ab638787c99dbfe9c8103883bee278``` ``` cd <your path>/examples/sql liquibase update``` Liquibase displays the following output: ``` Running Changeset: example-changelog.sql::1::your.name Running Changeset: example-changelog.sql::2::your.name Running Changeset: example-changelog.sql::3::your.name Liquibase command 'update' was executed successfully.``` Liquibase applies the following updates, which are specified as Liquibasechangesets in example-changelog.sql: ``` --changeset your.name:1 create table person ( id int primary key, name varchar(50) not null, address1 varchar(50), address2 varchar(50), city varchar(30) ) --changeset your.name:2 create table company ( id int primary key, name varchar(50) not null, address1 varchar(50), address2 varchar(50), city varchar(30) ) --changeset other.dev:3 alter table person add column country varchar(2)``` The author-id value pairs your.name:1 and your.name:2 prevent their respective changesets from accidentally being run multiple times as new changesets are added to the changelog for subsequent updates. For more information, see Changelog. Using a text editor, open <your path>/examples/sql/example-changelog.sql and add the following changeset to the end of the file: ``` --changeset your.name:4 ALTER TABLE person ADD nickname varchar(30);``` Enter the following command: ``` liquibase update``` Liquibase displays the following output: ``` Running Changeset: example-changelog.sql::4::your.name Liquibase command 'update' was executed successfully.``` 2024 Liquibase Inc. All Rights Reserved. Liquibase is a registered trademark of Liquibase Inc. (737) 402-7187" } ]
{ "category": "App Definition and Development", "file_name": "flow.html.md", "project_name": "Liquibase", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "flow lets you run a series of commands contained in one or more stages, as configured in a Liquibase flow file. This command is available in Liquibase 4.15.0 and later. Liquibase flow files let you create portable, platform-independent Liquibase workflows that can run anywhere without modification. This includes Jenkins, GitHub actions, a developers desktop, or any other CI/CD support tool. You can use the flow command to run your flow file and execute many other commands all in one place. For more information, see Liquibase Flow Files. To run the flow command, specify the driver, classpath, and URL in Liquibase properties file. For more information, see Create and Configure a liquibase.properties File. You can also specify these properties in your command line. Then run the flow command: ``` liquibase flow``` | Attribute | Definition | Requirement | |:--|:-|:--| | --license-key=<string> | Your Liquibase Pro license key | Required | --license-key=<string> Your Liquibase Pro license key | Attribute | Definition | Requirement | |:|:--|:--| | --flow-file=<string> | The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. | Optional | | --flow-file-strict-parsing=<true|false> | If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. | Optional | | --flow-shell-interpreter=<string> | The default interpreter used to execute shell commands. Examples include bash, sh, and cmd. | Optional | | --flow-shell-keep-temp-files=<true|false> | If true, do not delete temporary files created by the shell command execution. Default: false. | Optional | --flow-file=<string> The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. --flow-file-strict-parsing=<true|false> If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. --flow-shell-interpreter=<string> The default interpreter used to execute shell commands. Examples include bash, sh, and cmd. --flow-shell-keep-temp-files=<true|false> If true, do not delete temporary files created by the shell command execution. Default: false. | Attribute | Definition | Requirement | |:-|:-|:--| | globalArgs: { license-key: \"<string>\" } | Your Liquibase Pro license key | Required | globalArgs: { license-key: \"<string>\" } Your Liquibase Pro license key | Attribute | Definition | Requirement | |:--|:--|:--| | cmdArgs: { flow-file: \"<string>\" } | The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. | Optional | | cmdArgs: { flow-file-strict-parsing: \"<true|false>\" } | If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. | Optional | | cmdArgs: { flow-shell-interpreter: \"<string>\" } | The default interpreter used to execute shell commands. Examples include bash, sh, and" }, { "data": "| Optional | | cmdArgs: { flow-shell-keep-temp-files: \"<true|false>\" } | If true, do not delete temporary files created by the shell command execution. Default: false. 
| Optional | cmdArgs: { flow-file: \"<string>\" } The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. cmdArgs: { flow-file-strict-parsing: \"<true|false>\" } If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. cmdArgs: { flow-shell-interpreter: \"<string>\" } The default interpreter used to execute shell commands. Examples include bash, sh, and cmd. cmdArgs: { flow-shell-keep-temp-files: \"<true|false>\" } If true, do not delete temporary files created by the shell command execution. Default: false. | Attribute | Definition | Requirement | |:-|:-|:--| | liquibase.licenseKey: <string> | Your Liquibase Pro license key | Required | liquibase.licenseKey: <string> Your Liquibase Pro license key | Attribute | Definition | Requirement | |:|:--|:--| | liquibase.command.flowFile: <string> liquibase.command.<cmdName>.flowFile: <string> | The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. | Optional | | liquibase.command.flowFileStrictParsing: <true|false> liquibase.command.<cmdName>.flowFileStrictParsing: <true|false> | If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. | Optional | | liquibase.command.flowShellInterpreter: <string> liquibase.command.<cmdName>.flowShellInterpreter: <string> | The default interpreter used to execute shell commands. Examples include bash, sh, and cmd. | Optional | | liquibase.command.flowShellKeepTempFiles: <true|false> liquibase.command.<cmdName>.flowShellKeepTempFiles: <true|false> | If true, do not delete temporary files created by the shell command execution. Default: false. | Optional | liquibase.command.flowFile: <string> liquibase.command.<cmdName>.flowFile: <string> The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. liquibase.command.flowFileStrictParsing: <true|false> liquibase.command.<cmdName>.flowFileStrictParsing: <true|false> If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. liquibase.command.flowShellInterpreter: <string> liquibase.command.<cmdName>.flowShellInterpreter: <string> The default interpreter used to execute shell commands. Examples include bash, sh, and cmd. liquibase.command.flowShellKeepTempFiles: <true|false> liquibase.command.<cmdName>.flowShellKeepTempFiles: <true|false> If true, do not delete temporary files created by the shell command execution. Default: false. | Attribute | Definition | Requirement | |:|:-|:--| | JAVA_OPTS=-Dliquibase.licenseKey=<string> | Your Liquibase Pro license key | Required | JAVA_OPTS=-Dliquibase.licenseKey=<string> Your Liquibase Pro license key | Attribute | Definition | Requirement | |:-|:--|:--| | JAVAOPTS=-Dliquibase.command.flowFile=<string> JAVAOPTS=-Dliquibase.command.<cmdName>.flowFile=<string> | The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. 
| Optional | | JAVAOPTS=-Dliquibase.command.flowFileStrictParsing=<true|false> JAVAOPTS=-Dliquibase.command.<cmdName>.flowFileStrictParsing=<true|false> | If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. | Optional | | JAVAOPTS=-Dliquibase.command.flowShellInterpreter=<string> JAVAOPTS=-Dliquibase.command.<cmdName>.flowShellInterpreter=<string> | The default interpreter used to execute shell commands. Examples include bash, sh, and cmd. | Optional | | JAVAOPTS=-Dliquibase.command.flowShellKeepTempFiles=<true|false> JAVAOPTS=-Dliquibase.command.<cmdName>.flowShellKeepTempFiles=<true|false> | If true, do not delete temporary files created by the shell command execution. Default: false. | Optional | JAVA_OPTS=-Dliquibase.command.flowFile=<string>" }, { "data": "The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. JAVA_OPTS=-Dliquibase.command.flowFileStrictParsing=<true|false> JAVA_OPTS=-Dliquibase.command.<cmdName>.flowFileStrictParsing=<true|false> If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. JAVA_OPTS=-Dliquibase.command.flowShellInterpreter=<string> JAVA_OPTS=-Dliquibase.command.<cmdName>.flowShellInterpreter=<string> The default interpreter used to execute shell commands. Examples include bash, sh, and cmd. JAVA_OPTS=-Dliquibase.command.flowShellKeepTempFiles=<true|false> JAVA_OPTS=-Dliquibase.command.<cmdName>.flowShellKeepTempFiles=<true|false> If true, do not delete temporary files created by the shell command execution. Default: false. | Attribute | Definition | Requirement | |:-|:-|:--| | LIQUIBASELICENSEKEY=<string> | Your Liquibase Pro license key | Required | LIQUIBASELICENSEKEY=<string> Your Liquibase Pro license key | Attribute | Definition | Requirement | |:|:--|:--| | LIQUIBASECOMMANDFLOWFILE=<string> LIQUIBASECOMMAND<CMDNAME>FLOW_FILE=<string> | The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. | Optional | | LIQUIBASECOMMANDFLOWFILESTRICTPARSING=<true|false> LIQUIBASECOMMAND<CMDNAME>FLOWFILESTRICT_PARSING=<true|false> | If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. | Optional | | LIQUIBASECOMMANDFLOWSHELLINTERPRETER=<string> LIQUIBASECOMMAND<CMDNAME>FLOWSHELL_INTERPRETER=<string> | The default interpreter used to execute shell commands. Examples include bash, sh, and cmd. | Optional | | LIQUIBASECOMMANDFLOWSHELLKEEPTEMPFILES=<true|false> LIQUIBASECOMMAND<CMDNAME>FLOWSHELLKEEPTEMP_FILES=<true|false> | If true, do not delete temporary files created by the shell command execution. Default: false. | Optional | LIQUIBASECOMMANDFLOW_FILE=<string> LIQUIBASECOMMAND<CMDNAME>FLOWFILE=<string> The path to the configuration YAML file which contains one or more \"stages\" of commands to be executed in a liquibase flow operation. Default: liquibase.flowfile.yaml. LIQUIBASECOMMANDFLOWFILESTRICT_PARSING=<true|false> LIQUIBASECOMMAND<CMDNAME>FLOWFILESTRICTPARSING=<true|false> If true, parse flow file YAML to allow only Liquibase flow file-specific properties, indentations, and structures. Default: true. 
LIQUIBASECOMMANDFLOWSHELLINTERPRETER=<string> LIQUIBASECOMMAND<CMDNAME>FLOWSHELL_INTERPRETER=<string> The default interpreter used to execute shell commands. Examples include bash, sh, and cmd. LIQUIBASECOMMANDFLOWSHELLKEEPTEMPFILES=<true|false> LIQUIBASECOMMAND<CMDNAME>FLOWSHELLKEEPTEMP_FILES=<true|false> If true, do not delete temporary files created by the shell command execution. Default: false. In this example, we are running the default flow file that comes pre-installed with Liquibase. ``` stages: Default: actions: type: liquibase command: checks run cmdArgs: {checks-scope: changelog} type: liquibase command: update type: liquibase command: checks run cmdArgs: {checks-scope: database} endStage: actions: type: liquibase command: history``` ``` Flow file liquibase.flowfile.yaml is valid. Executing Stage: Default Executing 'liquibase' checks run Executing Quality Checks against example-changelog.xml Executing all changelog checks because a valid license key was found! WARNING: No database checks were run. Make sure the checks-scope property includes \"database\" to run database checks. In the CLI set --checks-scope=\"changelog,database\" or set an environment variable LIQUIBASECOMMANDCHECKS_SCOPE=database. Learn more at https://docs.liquibase.com/quality-checks INFO: Checks executed against SQL generated by H2 at jdbc:h2:tcp://localhost:9090/mem:dev. Checks-settings File: liquibase.checks-settings.conf ===================================================================================== Changesets Validated: in example-changelog.xml ID: 1; Author: your.name ID: 2; Author: your.name ID: 3; Author:" }, { "data": "Checks run against each changeset: Changesets Must Have a Comment Assigned (Short names: ChangesetCommentCheck) Changesets Must Have a Context Assigned (Short names: ChangesetContextCheck) Changesets Must Have a Label Assigned (Short names: ChangesetLabelCheck) Check Table Column Count (Short names: TableColumnLimit) One Change Per Changeset (Short names: OneChangePerChangeset) Require primary key when creating table (Short names: PrimaryKeyOnCreateTable) Rollback Required for Changeset (Short names: RollbackRequired) Warn on Detection of 'SELECT *' (Short names: SqlSelectStarWarn) Warn on Detection of 'USE DATABASE' statements (Short names: WarnOnUseDatabase) Warn on Use of User Defined ChangeTypes (Short names: DetectChangeType) Warn when 'DROP COLUMN' detected (Short names: ChangeDropColumnWarn) Warn when 'DROP TABLE' detected (Short names: ChangeDropTableWarn) Warn when 'MODIFY <column>' detected (Short names: ModifyDataTypeWarn) Changelogs Checks Skipped Due to unsupported changeset type for this check: Warn on Detection of 'GRANT' Statements (Short names: SqlGrantWarn) skipped for: 1:your.name, 2:your.name, 3:other.dev Warn on Detection of 'REVOKE' Statements (Short names: SqlRevokeWarn) skipped for: 1:your.name, 2:your.name, 3:other.dev Warn on Detection of grant that contains 'WITH ADMIN OPTION' (Short names: SqlGrantAdminWarn) skipped for: 1:your.name, 2:your.name, 3:other.dev Warn on Detection of grant that contains 'WITH GRANT OPTION' (Short names: SqlGrantOptionWarn) skipped for: 1:your.name, 2:your.name, 3:other.dev Warn when 'TRUNCATE TABLE' detected (Short names: ChangeTruncateTableWarn) skipped for: 1:your.name, 2:your.name, 3:other.dev INFO: Customize this output with the 'checks-output' property. 
See list of options with 'liquibase checks run --help' or https://docs.liquibase.com/quality-checks INFO: The return code of all SQL Parser failures can be customized by setting the --sql-parser-fail-severity/LIQUIBASECOMMANDCHECKSRUNSQLPARSERFAIL_SEVERITY property, including setting it to '0' to prevent job interruptions. Learn more at https://docs.liquibase.com/quality-checks Liquibase command 'checks run' was executed successfully. * Executing 'liquibase' update Running Changeset: example-changelog.xml::1::your.name Running Changeset: example-changelog.xml::2::your.name Running Changeset: example-changelog.xml::3::other.dev UPDATE SUMMARY Run: 3 Previously run: 0 Filtered out: 0 Failed deployment: 0 Total change sets: 3 Liquibase: Update has been successful. Rows affected: 3 Liquibase command 'update' was executed successfully. * Executing 'liquibase' checks run Executing Quality Checks against database jdbc:h2:tcp://localhost:9090/mem:dev Executing all database checks because a valid license key was found! INFO This command might not yet capture Liquibase Pro additional object types on h2 Checks-settings File: liquibase.checks-settings.conf ===================================================================================== Database objects Validated: Catalog : 1 Column : 11 Index : 2 PrimaryKey : 2 Schema : 1 Table : 2 To increase details set the --verbose property Checks run against database jdbc:h2:tcp://localhost:9090/mem:dev: Check Table Column Count (Short names: TableColumnLimit) Table must have an index (Short names: CheckTablesForIndex) INFO: Customize this output with the 'checks-output' property. See list of options with 'liquibase checks run --help' or https://docs.liquibase.com/quality-checks INFO: The return code of all SQL Parser failures can be customized by setting the --sql-parser-fail-severity/LIQUIBASECOMMANDCHECKSRUNSQLPARSERFAIL_SEVERITY property, including setting it to '0' to prevent job interruptions. Learn more at https://docs.liquibase.com/quality-checks Liquibase command 'checks run' was executed successfully. * Executing Stage: endStage * Executing 'liquibase' history Liquibase History for jdbc:h2:tcp://localhost:9090/mem:dev +++--++--+--+ | Deployment ID | Update Date | Changelog Path | Changeset Author | Changeset ID | Tag | +++--++--+--+ | 9150144810 | 2/28/24, 2:55 PM | example-changelog.xml | your.name | 1 | | +++--++--+--+ | 9150144810 | 2/28/24, 2:55 PM | example-changelog.xml | your.name | 2 | | +++--++--+--+ | 9150144810 | 2/28/24, 2:55 PM | example-changelog.xml | other.dev | 3 | | +++--++--+--+ Liquibase command 'history' was executed successfully. Liquibase command 'flow' was executed successfully.``` 2024 Liquibase Inc. All Rights Reserved. Liquibase is a registered trademark of Liquibase Inc. (737) 402-7187" } ]
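The flow options listed above can be supplied either as JVM system properties via JAVA_OPTS or as environment variables. The sketch below assumes a bash shell and a hypothetical flow file named release.flowfile.yaml; the property and variable names are the ones from the tables above.

```
# JVM system-property form (applies to the flow run)
JAVA_OPTS="-Dliquibase.command.flowFile=release.flowfile.yaml -Dliquibase.command.flowShellInterpreter=bash" \
  liquibase flow

# Equivalent environment-variable form
LIQUIBASE_COMMAND_FLOW_FILE=release.flowfile.yaml \
LIQUIBASE_COMMAND_FLOW_FILE_STRICT_PARSING=true \
  liquibase flow
```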
{ "category": "App Definition and Development", "file_name": "liquibase-sql.html.md", "project_name": "Liquibase", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "DATABASECHANGELOGHISTORY table on BigQuery, upgraded Flow Variables, include and includeAll in Formatted SQL, and the --pro-strict parameter. DATABASECHANGELOGHISTORY table, Quality Checks Chains, and the Rollback Report Flow Conditionals, simpler PatternAFollowedByPatternB, and Checks Report See all release notes 2024 Liquibase Inc. All Rights Reserved. Liquibase is a registered trademark of Liquibase Inc. (737) 402-7187" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Mergify", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Learn how Mergify integrates with GitHub Actions. Learn how Mergify integrates with CircleCI. Learn how Mergify integrates with Jenkins. How to automate your BuildKite workflow using Mergify. How to automate your TeamCity workflow using Mergify. How to automate your GitLab CI workflow using Mergify. Monitor your merge queues with Mergify Datadog integration. Learn how you can use Slack and Mergify together. Learn how you can use Graphite and Mergify together. How to leverage your Rush monorepo with Mergify. How to automate your dependencies update using Mergify How to automate your dependencies update using Mergify Streamline your dependency management." } ]
{ "category": "App Definition and Development", "file_name": "blue-green-deployments.md", "project_name": "Octopus Deploy", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Blue-green deployments are a pattern whereby we reduce downtime during production deployments by having two separate production environments (blue and green). One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production. You usually need to do this quickly in order to minimize downtime. The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, lets say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle. In a blue-green deployment model, the production environment changes with each release: As well as reducing downtime, Blue-Green can be a powerful way to use extra hardware compared to having a dedicated staging environment: Please let us know if you have any feedback about this page. Send feedback Page updated on Sunday, January 1, 2023 Copyright 2024 Octopus Deploy" } ]
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Octopus Deploy", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Best-of-breed Continuous Delivery Octopus Deploy is a sophisticated, best-of-breed continuous delivery (CD) platform for modern software teams. Octopus offers powerful release orchestration, deployment automation, and runbook automation, while handling the scale, complexity and governance expectations of even the largest organizations with the most complex deployment challenges. Octopus has a modern, friendly user experience that makes it easy for your teams to self-service application releases, and democratizes how you deliver software. Octopus also has a comprehensive API - anything the UI can do, the API can do too. Octopus takes over where your CI server ends, modelling the entire release orchestration process of software. This includes: You can use Octopus to deploy anything, anywhere. Whether its Kubernetes, Linux or Windows virtual machines, Amazon Web Services, Azure, or Google Cloud, so long as Octopus can speak to it via our Tentacle agent, SSH, command line, or a web service, Octopus can deploy to it. Advanced role-based access control ensures only the right people can deploy to production, all changes are audited, and teams can get their work done without proliferation of admin access to cloud accounts or privileged systems. Octopus can be self-managed and hosted in your own infrastructure or private cloud, or we can host it for you in Octopus Cloud. Octopus models deployments in advanced ways that allows you to tame complexity at scale. For example, if deploying to production means a coordinated rollout of applications and dependencies across geographically distributed clusters in the cloud on behalf of thousands of end-customers, each with their own instances of your application, or pushing releases out to edge networks or servers running in physical retail stores, hospitals or hotels, Octopus enables you to model this in ways no other CD tool can using tenanted deployments. Built by experienced DevOps practitioners over a decade, and battle tested by thousands of organizations ranging from small teams to large Fortune 100 organizations with tens of thousands of engineers, projects and deployment targets, Octopus embodies a set of principles about what weve learned in doing CD well: On most teams, continuous integration (CI) or build automation is a solved problem. Great options like GitHub Actions, GitLab CI, Jenkins, TeamCity, BuildKite, and Azure DevOps exist and are widely used by teams today. These systems are all designed in a similar way - they monitor source code repositories for changes, compile code, run unit tests, and give software developers fast feedback on their code" }, { "data": "For most of these systems, the CD functionality often just means they provide some ability to call a deployment script that you provide - the deployment script is up to you. They might also provide some way to model simple dev/test/production promotion workflows by calling a different deployment script for each environment. When a software team is starting out, this put your deployment script here approach to CD can work - the deployment script starts simple. Over time as applications evolve, complexity grows. That deployment script becomes hundreds and thousands of lines of specialized scripts and YAML that at best only a couple of people on the team understand or are comfortable changing. Multiply that over many teams and applications at the company and DIY shadow CD is born. 
Teams of engineers become dedicated to building custom CD glue to bridge the gap between \"put your deployment script here\" and the realities of the complexity of delivering software at scale. Decoupling the CI platform from the CD platform allows teams to bring their favorite CI tool - and most organizations have more than one - while we focus on giving you the most powerful best-of-breed CD capabilities. Octopus integrates with popular CI tools like GitHub Actions, Jenkins or TeamCity, letting them do what they do best - the CI part of the feedback loop. Octopus then takes over from the build artifact forward, and handles the release and deployment aspects of CD in advanced ways that no CI/CD tool can. We promise that when using Octopus, compared to a generic CI/CD tool, you'll spend 1/10th the amount of time building DIY shadow CD glue, and more time shipping new features and delivering valuable software to production. Octopus is easy to get started with - you can go from zero to fully automating your deployments in minutes. For a tour of the key Octopus concepts, begin with our Getting Started guide. If you deliver software to Kubernetes, our Kubernetes overview is a good starting point. If you enjoy learning via video, the Octopus 101 recording below also serves as a great introduction to the key Octopus concepts in the context of modern deployments. Watch the Octopus 101 We designed Octopus to be easy to learn, but CD is an inherently complex problem domain. Our 270+ people all over the globe have deep experience in all things CD, and we're happy to help even when the problem is outside of Octopus. If you get stuck or just need advice, we're here to help. Happy deployments!" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "OpenKruise", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Welcome to OpenKruise! OpenKruise is an extended component suite for Kubernetes, which mainly focuses on automated management of large-scale applications, such as deployment, upgrade, ops and availability protection. Most features provided by OpenKruise are built primarily based on CRD extensions. They can work in pure Kubernetes clusters without any other dependencies. Advanced Workloads OpenKruise contains a set of advanced workloads, such as CloneSet, Advanced StatefulSet, Advanced DaemonSet, BroadcastJob. They all support not only the basic features which are similar to the original Workloads in Kubernetes, but also more advanced abilities like in-place update, configurable scale/upgrade strategies, parallel operations. In-place Update is a new methodology to update container images and even environments. It only restarts the specific container with the new image and the Pod will not be recreated, which leads to much faster update process and much less side effects on other sub-systems such as scheduler, CNI or CSI. Bypass Application Management OpenKruise provides several bypass ways to manage sidecar container, multi-domain deployment for applications, which means you can manage these things without modifying the Workloads of applications. For example, SidecarSet can help you inject sidecar containers into all matching Pods during creation and even update them in-place with no effect on other containers in Pod. WorkloadSpread constrains the spread of stateless workload, which empowers single workload the abilities for multi-domain and elastic deployment. High-availability Protection OpenKruise works hard on protecting high-availability for applications. Now it can prevent your Kubernetes resources from the cascading deletion mechanism, including CRD, Namespace and almost all kinds of Workloads. In voluntary disruption scenarios, PodUnavailableBudget can achieve the effect of preventing application disruption or SLA degradation, which is not only compatible with Kubernetes PDB protection for Eviction API, but also able to support the protection ability of above scenarios. High-level Operation Features OpenKruise also provides high-level operation features to help you manage your applications better. You can use ImagePullJob to download any images on any nodes you want. Or you can even requires one or more containers in an running Pod to be restarted. Briefly speaking, OpenKruise plays a subsidiary role of Kubernetes. Kubernetes itself has already provides some features for application deployment and management, such as some basic Workloads. But it is far from enough to deploy and manage lots of applications in large-scale production clusters. OpenKruise can be easily installed in any Kubernetes clusters. It makes up for defects of Kubernetes, including but not limited to application deployment, upgrade, protection and operations. OpenKruise is not a PaaS and it will not provide any abilities of PaaS. It is a standard extended suite for Kubernetes, currently contains two components named kruise-manager and kruise-daemon. PaaS can use the features provided by OpenKruise to make applications deployment and management better. Here are some recommended next steps:" } ]
{ "category": "App Definition and Development", "file_name": "installation.md", "project_name": "OpenKruise", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Since v1.0.0 (alpha/beta), OpenKruise requires Kubernetes version >= 1.16. Since v1.6.0 (alpha/beta), OpenKruise requires Kubernetes version >= 1.18. However it's still possible to use OpenKruise with Kubernetes versions 1.16 and 1.17 as long as KruiseDaemon is not enabled(install/upgrade kruise charts with featureGates=\"KruiseDaemon=false\") Since v1.6.0 (alpha/beta), KruiseDaemon will no longer support v1alpha2 CRI runtimes. However, it is still possible to use OpenKruise on Kubernetes clusters with nodes that only support v1alpha2 CRI, as long as KruiseDaemon is not enabled (install/upgrade Kruise charts with featureGates=\"KruiseDaemon=false\"). Kruise can be simply installed by helm v3.5+, which is a simple command-line tool and you can get it from here. ``` Note: Changelog. ``` Note that: If you have problem with connecting to https://openkruise.github.io/charts/ in production, you might need to download the chart from here manually and install or upgrade with it. ``` $ helm install/upgrade kruise /PATH/TO/CHART``` Note that installing this chart directly means it will use the default template values for Kruise. You may have to set your specific configurations if it is deployed into a production cluster, or you want to configure feature-gates. The following table lists the configurable parameters of the chart and their default values. | Parameter | Description | Default | |:-|:-|:--| | featureGates | Feature gates for Kruise, empty string means all by default | nan | | installation.namespace | namespace for kruise installation | kruise-system | | installation.createNamespace | Whether to create the installation.namespace | true | | manager.log.level | Log level that kruise-manager printed | 4 | | manager.replicas | Replicas of kruise-controller-manager deployment | 2 | | manager.image.repository | Repository for kruise-manager image | openkruise/kruise-manager | | manager.image.tag | Tag for kruise-manager image | v1.6.3 | | manager.resources.limits.cpu | CPU resource limit of kruise-manager container | 200m | | manager.resources.limits.memory | Memory resource limit of kruise-manager container | 512Mi | | manager.resources.requests.cpu | CPU resource request of kruise-manager container | 100m | | manager.resources.requests.memory | Memory resource request of kruise-manager container | 256Mi | | manager.metrics.port | Port of metrics served | 8080 | | manager.webhook.port | Port of webhook served | 9443 | | manager.nodeAffinity | Node affinity policy for kruise-manager pod | {} | | manager.nodeSelector | Node labels for kruise-manager pod | {} | | manager.tolerations | Tolerations for kruise-manager pod | [] | | daemon.log.level | Log level that kruise-daemon printed | 4 | | daemon.port | Port of metrics and healthz that kruise-daemon served | 10221 | | daemon.resources.limits.cpu | CPU resource limit of kruise-daemon container | 50m | | daemon.resources.limits.memory | Memory resource limit of kruise-daemon container | 128Mi | | daemon.resources.requests.cpu | CPU resource request of kruise-daemon container | 0 | | daemon.resources.requests.memory | Memory resource request of kruise-daemon container | 0 | | daemon.affinity | Affinity policy for kruise-daemon pod | {} | |" }, { "data": "| Location of the container manager control socket | /var/run | | daemon.socketFile | Specify the socket file name in socketLocation (if you are not using containerd/docker/pouch/cri-o) | nan | | webhookConfiguration.failurePolicy.pods | The failurePolicy for pods in mutating webhook 
configuration | Ignore | | webhookConfiguration.timeoutSeconds | The timeoutSeconds for all webhook configuration | 30 | | crds.managed | Kruise will not install CRDs with chart if this is false | true | | manager.resyncPeriod | Resync period of informer kruise-manager, defaults no resync | 0 | | manager.hostNetwork | Whether kruise-manager pod should run with hostnetwork | false | | imagePullSecrets | The list of image pull secrets for kruise image | false | | enableKubeCacheMutationDetector | Whether to enable KUBECACHEMUTATION_DETECTOR | false | Specify each parameter using the --set key=value[,key=value] argument to helm install or helm upgrade. Feature-gate controls some influential features in Kruise: | Name | Description | Default | Effect (if closed) | |:|:--|:-|:| | PodWebhook | Whether to open a webhook for Pod create | True | SidecarSet/KruisePodReadinessGate disabled | | KruiseDaemon | Whether to deploy kruise-daemon DaemonSet | True | ImagePulling/ContainerRecreateRequest disabled | | DaemonWatchingPod | Should each kruise-daemon watch pods on the same node | True | For in-place update with same imageID or env from labels/annotations | | CloneSetShortHash | Enables CloneSet controller only set revision hash name to pod label | False | CloneSet name can not be longer than 54 characters | | KruisePodReadinessGate | Enables Kruise webhook to inject 'KruisePodReady' readiness-gate to all Pods during creation | False | The readiness-gate will only be injected to Pods created by Kruise workloads | | PreDownloadImageForInPlaceUpdate | Enables CloneSet controller to create ImagePullJobs to pre-download images for in-place update | False | No image pre-download for in-place update | | CloneSetPartitionRollback | Enables CloneSet controller to rollback Pods to currentRevision when number of updateRevision pods is bigger than (replicas - partition) | False | CloneSet will only update Pods to updateRevision | | ResourcesDeletionProtection | Enables protection for resources deletion | True | No protection for resources deletion | | TemplateNoDefaults | Whether to disable defaults injection for pod/pvc template in workloads | False | Should not close this feature if it has open | | PodUnavailableBudgetDeleteGate | Enables PodUnavailableBudget for pod deletion, eviction | True | No protection for pod deletion, eviction | | PodUnavailableBudgetUpdateGate | Enables PodUnavailableBudget for" }, { "data": "update | False | No protection for in-place update | | WorkloadSpread | Enables WorkloadSpread to manage multi-domain and elastic deploy | True | WorkloadSpread disabled | | InPlaceUpdateEnvFromMetadata | Enables Kruise to in-place update a container in Pod when its env from labels/annotations changed and pod is in-place updating | True | Only container image can be in-place update | | StatefulSetAutoDeletePVC | Enables policies controlling deletion of PVCs created by a StatefulSet | True | No deletion of PVCs by StatefulSet | | PreDownloadImageForDaemonSetUpdate | Enables DaemonSet controller to create ImagePullJobs to pre-download images for in-place update | False | No image pre-download for in-place update | | PodProbeMarkerGate | Whether to turn on PodProbeMarker ability | True | PodProbeMarker disabled | | SidecarSetPatchPodMetadataDefaultsAllowed | Allow SidecarSet patch any annotations to Pod Object | False | Annotations are not allowed to patch randomly and need to be configured via SidecarSetPatchPodMetadataWhiteList | | SidecarTerminator | SidecarTerminator enables SidecarTerminator 
to stop sidecar containers when all main containers exited | False | SidecarTerminator disabled | | CloneSetEventHandlerOptimization | CloneSetEventHandlerOptimization enable optimization for cloneset-controller to reduce the queuing frequency cased by pod update | False | optimization for cloneset-controller to reduce the queuing frequency cased by pod update disabled | | ImagePullJobGate | Enables ImagePullJob to pre-download images | False | ImagePullJob disabled | | ResourceDistributionGate | Enables ResourceDistribution to distribute configmaps or secret resources | False | ResourceDistribution disabled | | DeletionProtectionForCRDCascadingGate | Enables DeletionProtection for crd cascading deletion | False | DeletionProtection for crd cascading deletion disabled | If you want to configure the feature-gate, just set the parameter when install or upgrade. Such as: ``` $ helm install kruise https://... --set featureGates=\"ResourcesDeletionProtection=true\\,PreDownloadImageForInPlaceUpdate=true\"``` If you want to enable all feature-gates, set the parameter as featureGates=AllAlpha=true. If you are in China and have problem to pull image from official DockerHub, you can use the registry hosted on Alibaba Cloud: ``` $ helm install kruise https://... --set manager.image.repository=openkruise-registry.cn-shanghai.cr.aliyuncs.com/openkruise/kruise-manager``` Usually K3s has the different runtime path from the default /var/run. So you have to set daemon.socketLocation to the real runtime socket path on your K3s node (e.g. /run/k3s or /var/run/k3s/). When using a custom CNI (such as Weave or Calico) on EKS, the webhook cannot be reached by default. This happens because the control plane cannot be configured to run on a custom CNI on EKS, so the CNIs differ between control plane and worker nodes. To address this, the webhook can be run in the host network so it can be reached, by setting --set manager.hostNetwork=true when use helm install or upgrade. Note that this will lead to all resources created by Kruise, including webhook configurations, services, namespace, CRDs, CR instances and Pods managed by Kruise controller, to be deleted! Please do this ONLY when you fully understand the consequence. To uninstall kruise if it is installed with helm charts: ``` $ helm uninstall kruiserelease \"kruise\" uninstalled``` kruise-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. It is not focused on the health of the individual OpenKruise components, but rather on the health of the various objects inside, such as clonesets, advanced statefulsets and sidecarsets. ```" } ]
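Putting the pieces above together, a typical install might add the chart repository mentioned in the note and override a few chart parameters and feature gates in one command. The repository alias and chart name (`openkruise/kruise`) are assumptions — adjust them to however you fetch the chart.

```
$ helm repo add openkruise https://openkruise.github.io/charts/
$ helm repo update

$ helm upgrade --install kruise openkruise/kruise --version 1.6.3 \
    --set featureGates="ResourcesDeletionProtection=true\,ImagePullJobGate=true" \
    --set manager.replicas=2 \
    --set manager.log.level=4
```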
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "OpsMx", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "In this documentation, we will learn about the different products of OpsMx which helps to make your product release process smooth and hassle free. In the upcoming sections, we will learn about the installation process and also how to use the different features of the products in detail. The products are: OpsMx Intelligent Software Delivery Platform - Spinnaker Orchestration Module - OpsMx Enterprise for Spinnaker Data and Intelligence Module - Autopilot OpsMx Intelligent Software Delivery Platform - Argo OpsMx Enterprise for Argo (OEA) OpsMX Secure Software Delivery - SSD Last updated 2 months ago Was this helpful?" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Ortelius", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Ortelius uses the Command Line Interface for recording what occurs at both the build and deploy steps of your DevOps Pipeline. By recording the deployment activity Ortelius can track the location of each Component Version and Application Version running across all of your Environments. The benefit of allowing Ortelius to track your deployments is to create a unified high-level dashboard of where you are experiencing drift. Because each Component may be deployed with a different deployment process, the data that displays this information is often stored in different tools. Ortelius serves as a centralized location of deployment intelligence and tracks your complete inventory in a single dashboard. The Ortelius Command Line Interface performs the action of monitoring the deployments executed by your pipeline. Install the Ortelius CLI to begin recording your deployments. Install the Ortelius CLI where your CI/CD server is running. Refer to the Ortelius GitHub CLI Documentation for installation instructions and usage. Ortelius uses the concepts of Environments and Endpoints to track where a Component has been deployed. The Command Line Interface will create these objects, but it is useful to understand how they are used. Environments represent where you execute your Application and Components such as a Kubernetes Cluster for Dev, Test or Production. An Environment could be a virtual cloud or physical datacenter. Applications run in many Environments to support your Pipeline states. Environments are associated to a Domain. You can assign Environments to any level of Domain including the Global Domain. However, Environments are most commonly associated to a Project Domain used for Applications. A Project Domain is used to manage an Application and may be defined to include Life Cycle Subdomains for managing your Applications progression from development through release. The Environment menu is on the left of the main panel. Select the Environment menu, to view a list of all Environments to which you have access. The Search bar, represented by a funnel icon, allows you to reorder Environments based on Name or Domain. The Environments List View has the following Tabs. | Tab | Description | |:--|:--| | Refresh | Refreshes the browser. | | Add | Allows you to Add a new Environment. | | Delete | Deletes the selected item. | | Reports | Success or Failed Report: This report shows an ongoing list of all deployments to all Environments, regardless of Domain or Application with success or fail status. This report can be sorted based on the column for easy viewing. It can also be exported. | Double click on an item in the list to see the Dashboard view. The Dashboard view displays all information related to a specific Environment. The Dashboard view has one additional tab option - Calendars. Below are the Details for an Environment. | Field | Description | |:|:| | Full Domain | The fully qualified name of the Domain, including all parent Domains. | | Name | The name of the Environment. Note: Duplicate Names are" }, { "data": "It is recommended that Environments be named in a specific manner, such as DevEnv-HipsterStore. | | Owner Type | User or Group | | Owner | The owner defaults to the User or Group who created it. | | Summary | A short text description of the Environment. | | Created | Auto generated date and time the Environment was created. | | Modified | Auto generated date and time the Environment was last modified. 
| The Access Section allows Users within designated Groups to update the Environment. To add a Group to one of the access lists, drag and drop the Group from the Available Groups list onto desired access list. All Users who belong to a Group that appear in one of the Access lists will be granted access to the Environment in the following ways: | Access | Description | |:|:-| | View | Allows the User to see the Environment. If the User does not belong to a Group in the View Access list, the Environment will not appear in the List View. | | Change | Allows the User to change the Environments characteristics i.e. Name, Summary, etc. | | Deploy | Allows Users to deploy Applications into the selected Environment. | The Audit Trail displays audit entries for any changes or deployments that impact this object. It includes all changes in the object including User date and time, and deployments with unique numbers. For deployment audits, select a deployment number to see the details including: | Access | Description | |:--|:| | Log | The output of the deployment. | | Files | Any files or objects deployed. | | Step Duration | Deployment Steps with time required to execute. | | Feedback Loop | Shows what was updated starting from Component. | You can also Subscribe or Comment to an Audit Entry. Subscribe: Allows you to receive information about the selected deployment. Comment: Click on Comment to add information. There is a field above the list labeled Say something about this Application that can have comments placed into it, and files can be attached to the comment as well. Entering text into this field activates the Add Message button. Click to save the comment as a line in the list. Add Files to Comments: Click on the paperclip icon to add a file to the message. Once done, click on the Add Message button. These attachments can later be retrieved by clicking on the paperclip icon which then displays the name of the file within a list. Choose the file to download it into the your default Download directory on your local computer. Key Value Configurations are Value Pairs for managing associative arrays assigned to the Object. Key Value Pairs can be assigned at multiple levels, from the Global Domain down to an individual Component and have a scope. Lower level Objects can override a higher level" }, { "data": "Below is the order in which Key Value Pairs can be overridden: | Object | Description | |:|:-| | Global | Contains all Environment variables and any additional Key Value Pairs set by the user when running that task. | | Environment | Overrides any Global Key Value Pairs during a deployment. | | Application | Overrides the Environment Key Value Pairsduring a deployment. | | Endpoint | Overrides the Application Key Value Pairs during a deployment. | | Component | Overrides the Application Key Value Pairs during a deployed. | Key Value Pairs can be given any Name and a Value. Use +Add to add Key Value Pairs to the table. Use Save to confirm. Use the checkbox to Delete or Edit a Key Value Pair. Note: You will need to have pre-defined your Endpoints. See the Define Your Endpoints chapter for more information. Environments are a collection of Endpoints. Use this section to assign the Endpoints that will make up this Environment. Use +Add to create a new row in the Endpoints table. Use Save to commit the row. Select the row and use Edit or Delete to update or remove an Endpoint. When you add a new Endpoint the Hostname will be displayed. The Hostname is the actual network name or IP address. 
It is assigned when the Endpoint is defined, but is not a required field. If it is defined, it will be displayed in the row. This section shows the success/failure rate and time required for the last 10 deployments to this Environment. View all the Application Base Versions assigned to this Environment. This is read only. Applications Base Versions are associated to Environments when created using the Application Dashboard. This map shows you all of the current Component Versions, with Application Versions, that have been deployed to this Environment. Ortelius Calendar only shows you a history of what has already been deployed. An Endpoint is an object representing a container deployment host, virtual image, or physical server in an enterprises data center. An Ortelius Environment is a collection of Endpoints to which your Application will be deployed. Endpoints can be the location where you will run your Helm Chart for a Kubernetes deployment, a database server, cloud images, etc. There is a many-to-many relationship between Environments and Endpoints, so that an Endpoint can be assigned to more than one Environment, and an Environment can contain many Endpoints. The Endpoint menu is on the left of the main panel. Select the Endpoint menu to view a list of all Endpoints to which you have access. Or use the Search bar, represented by a funnel icon, to reorder Endpoints based on Name or Domain. The Endpoints List View has the following Tabs. | Tab | Description | |:--|:-| | Refresh | Refreshes the browser. | | Add | Allows you to Add a new Endpoint. | | Delete | Deletes the selected item. | Double click on an item in the list to see the Dashboard. The Dashboard view displays all information related to a specific" }, { "data": "| Field | Description | |:-|:| | Full Domain | The fully qualified name of the Domain to which the Endpoint is defined. | | Name | The name of the Endpoint object. For managing Kubernetes clusters, you should name your Endpoint to match the cluster name that the Endpoint is deploying to. This will allow Ortelius to track what has been deployed to each cluster. | | Owner Type | Group or User | | Owner | The owner defaults to the User or Group who created it. | | Summary | A short text description of the Endpoint. | | Created | The date and time the Endpoint was created. | | Modified | The date and time the Endpoint was last modified. | | Endpoint Operating System Type | The platform type of the physical or virtual server that the Endpoints resides on, the list currently includes Unix, Windows, Tandem, Stratus, and OpenVMS. For containers you should select Unix. | | Endpoint Types | Used to indicate what types of Components will be deployed to this Endpoint. Used to route specific types of Components to the matching EndPoint across Environments. | | Hostname | The unique name of a server that is used to identify it on the network. | | Protocol | The protocol used to communicate with the Endpoint. Options are ssh and winrm. | | ssh Port Number | The ssh Port used to connect to the Endpoint if the selected Protocol is ssh. | | Base Directory | If you would like to force all deployments to occur in a specific high level directory, enter it into this field. The Endpoint Base Directory will override the Component Base Directory. For more information see Formatting Directories on the order of how the deployment directory is formatted. 
| | Test Connection Result | The following fields display the result of the last Test Connection executed, performed by using the Test Connection option from the Endpoint Dashboard.Name Resolution - Checks to see if the DNS name can be resolved. Returns OK on success or Failed if not. Ping - Checks to see if the Endpoint responds to ping. Returns OK on success or Failed if not.Base Directory Check -Checks to ensure the Base Directory is available on the EndPoint Ping Time - Time in milliseconds (ms) for the Ping to respond.IPV4 Address - The IP address of the Hostname.Last Checked - Timestamp of when the last Test Connection was performed.Test Results - Success or Failed message for the last Test Connection executed. | Key Value Configurations are Value Pairs for managing associative arrays assigned to the Object. Key Value Pairs can be assigned at multiple levels, from the Global Domain down to an individual Component and have a scope. Lower level Objects can override a higher level Object. Below is the order in which Key Value Pairs can be overridden: | Object | Description | |:|:-| | Global | Contains all Environment variables and any additional Key Value Pairs set by the user when running that" }, { "data": "| | Environment | Overrides any Global Key Value Pairs during a deployment. | | Application | Overrides the Environment Key Value Pairsduring a deployment. | | Endpoint | Overrides the Application Key Value Pairs during a deployment. | | Component | Overrides the Application Key Value Pairs during a deployed. | Key Value Pairs can be given any Name and a Value. Use +Add to add Key Value Pairs to the table. Use Save to confirm. Use the checkbox to Delete or Edit a Key Value Pair. The Access Section allows Users within designated Groups to update the Endpoint. To add a Group to one of the access lists, drag and drop the Group from the Available Groups list onto desired access list. All Users who belong to a Group within one of the Access lists will be granted access to the Endpoint in the following ways: | Access | Description | |:--|:-| | View | Any User in any Group in this list can see the selected EndPoint. | | Change | Any User in any Group in this list can make changes to the Endpoint. | | Available Groups | This list contains all the Groups within the Ortelius installation. Dragging and dropping back and forth between this list and the other two lists allows or prevents access to viewing and changing the selected EndPoint. | The Audit Trail displays audit entries for any changes or deployments that impact this object. It includes any changes in the object including User date and time, and deployments with unique numbers. You can Subscribe to or Comment on an Audit Entry. Subscribe: Allows you to receive information about the selected deployment. Comment: Add information by clicking on the Comment link within a text entry field. There is a field above the list labeled Say something about this Application that can have comments placed into it, and files can be attached to the comment as well. Enter text into this field to activate the Add Message button. Click to save the comment. Add Files to Comments: Click on the paperclip icon to add a file to the message. Once added and you made a comment, click Add Message. Click on the paperclip icon to retrieve these attachments. The icon opens the line in the list to display the name of the file. Choose the file to download it into the your default Download directory on your local computer. 
The Trends graph shows you your success or failure rates overtime as well at the time required for the last 10 deployments. If an Application deployment is taking longer than previous deployments, this might indicate an issue with your deployment logic. This section provides a list of all current versions of Components that have been installed on the Endpoint with the Deployment Number. The Deployment Number is generated by Ortelius for each unique deployment. Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "PipeCD", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This page is a guideline for installing PipeCD into your Kubernetes cluster and deploying a hello world application to that same Kubernetes cluster. Note: Its not required to install the PipeCD control plane to the cluster where your applications are running. Please read this blog post to understand more about PipeCD in real life use cases. Note: If you want to experiment with PipeCD freely or dont have a Kubernetes cluster, we recommend this Tutorial. The official PipeCD client named pipectl can be installed using the following command ``` curl -Lo ./pipectl https://github.com/pipe-cd/pipecd/releases/download/v0.47.2/pipectlv0.47.2{OS}_amd64 ``` Then make the pipectl binary executable ``` chmod +x ./pipectl ``` You can also move the pipectl binary to the $PATH for later use ``` sudo mv ./pipectl /usr/local/bin/pipectl ``` ``` asdf plugin add pipectl && asdf install pipectl latest && asdf global pipectl latest ``` ``` aqua g -i pipe-cd/pipecd/pipectl && aqua i ``` ``` brew install pipe-cd/tap/pipectl ``` We can simply use pipectl quickstart command to start the PipeCD installation process and follow the instruction ``` pipectl quickstart --version v0.47.2 ``` Follow the instruction, the PipeCD control plane will be installed with a default project named quickstart. You can access to the PipeCD console at http://localhost:8080 and pipectl command will open the PipeCD console automatically in your browser. To login, you can use the configured static admin account as below: After successfully logging in, the browser will redirect you to the PipeCD console settings page at the piped settings tab. You will find the +ADD button on the top of this page, click there and insert information to register the deployment runner for PipeCD (called piped). Click on the Save button, and then you can see the piped-id and secret-key. Use the above value to fill the form showing on the terminal you run pipectl quickstart command ``` ... Fill up your registered Piped information: ID: 2bf655c6-d7a8-4b97-8480-43fb0155539e Key: 02s3b0b6bo07kvzr8662tke4i292uo5n8w1x9pn8q9rww5lk0b GitRemoteRepo: https://github.com/{FORKEDGITHUBORG}/examples.git ``` Thats all! Note: The pipectl quickstart command will run continuously to expose your PipeCD console on localhost:8080. If you stop the process, the installed PipeCD components will not be lost, you can access to the PipeCD console anytime using kubectl port-forward command ``` kubectl -n pipecd port-forward svc/pipecd 8080 ``` Above is all that is necessary to set up your own PipeCD (both control plane and agent), lets use the installed one to deploy your first Kubernetes application with PipeCD. Navigate to the Applications page, click on the +ADD button on the top left corner. Go to the ADD FROM SUGGESTIONS tab, then select: You should see a lot of suggested applications. Select the canary application and click the SAVE button to register. After a bit, the first deployment is complete and will automatically sync the application to the state specified in the current Git commit. Lets get started with deployment! All you have to do is to make a PR to update the image tag, scale the replicas, or change the manifests. For instance, open the kubernetes/canary/deployment.yaml under the forked examples' repository, then change the tag from v0.1.0 to v0.2.0. After a short wait, a new deployment will be started to update to v0.2.0. 
When you're finished experimenting with PipeCD quickstart mode, you can uninstall it using: ``` pipectl quickstart --uninstall ``` To prepare your PipeCD for a production environment, please visit the Installation guideline. For guidelines to use PipeCD to deploy your application in daily usage, please visit the User guide docs." } ]
{ "category": "App Definition and Development", "file_name": "about.md", "project_name": "Screwdriver", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "A collection of services that facilitate the workflow for continuous delivery pipelines. Screwdriver treats Continuous Delivery as a first-class citizen in your build pipeline. Easily define the path that your code takes from Pull Request to Production. Screwdriver ties directly into your DevOps daily habits. It tests your pull requests, builds your merged commits, and deploys to your environments. Define load tests, canary deployments, and multi-environment deployment pipelines with ease. Define your pipeline in a simple YAML file that lives beside your code. There is no external configuration of your pipeline to deal with, so you can review pipeline changes and roll them out with the rest of your codebase. Screwdriver's architecture uses pluggable components under the hood to allow you to swap out the pieces that make sense for your infrastructure. Swap in Postgres for the Datastore or Jenkins for the Executor. You can even dynamically select an execution engine based on the needs of each pipeline. For example, send golang builds to the kubernetes executor while your iOS builds go to a Jenkins execution farm. All code on this site is licensed under the Yahoo BSD License unless otherwise stated. 2016 Yahoo! Inc. All rights reserved. From here you can search these documents. Enter your search terms below." } ]
{ "category": "App Definition and Development", "file_name": "configuration.md", "project_name": "Screwdriver", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This is an interactive guide for exploring various important properties of the screwdriver.yaml configuration for projects. You can access information about properties by hovering over the property name. ``` shared: environment: NODE_ENV: test settings: email: addresses: [test@email.com, test2@email.com] statuses: [SUCCESS, FAILURE] annotations: beta.screwdriver.cd/my-cluster-annotation: my-data beta.screwdriver.cd/executor: k8s-vm screwdriver.cd/cpu: HIGH screwdriver.cd/ram: LOW parameters: region: \"us-west-1\" az: value: \"zone 1\" description: \"default availability zone\" jobs: main: requires: [~pr, ~commit, ~sd@123:main] sourcePaths: [\"src/app/\", \"screwdriver.yaml\"] image: node:lts steps: init: npm install test: npm test publish: requires: [main] template: node/publish@4.3.1 order: [init, publish, teardown-save-results] steps: publish: npm install teardown-save-results: cp ./results $SDARTIFACTSDIR deploy-west: requires: [] image: node:lts environment: DEPLOY_ENV: west steps: init: npm install deploy: npm deploy deploy-east: requires: [deploy-west] image: node:lts environment: DEPLOY_ENV: east steps: init: npm install deploy: npm deploy finished: requires: [stage@deployment] image: node:lts steps: echo: echo done stages: deployment: requires: [publish] jobs: [deploy-west, deploy-east] description: This stage is utilized for grouping jobs involved in deploying components to multiple regions. ``` A single job name or array of jobs that will trigger the job to run. Jobs defined with \"requires: ~pr\" are started when a pull request is opened, reopened, or modified. Jobs defined with \"requires: ~commit\" are started when a PR is merged or a commit/push is made directly to the defined SD branch; also runs when the Start button is clicked in the UI. Jobs defined with \"requires: ~sd@123:main\" are started by job \"main\" from pipeline \"123\". Jobs defined with \"requires: [deploy-west, deploy-east] are started after \"deploy-west\" and \"deploy-east\" are both done running successfully. \"Note: ~ jobs denote an OR functionality, jobs without a ~ denote join functionality. You can optionally specify source paths that will trigger a job upon modification. In this example, the \"main\" job will only run if changes are made to things under the \"src/app/\" directory or the \"screwdriver.yaml\" file. This feature is only available for Github SCM. Defines a global configuration that applies to all jobs. Shared configurations are merged with each job, but may be overridden by more specific configuration in a specific job. A set of key/value pairs for environment variables that need to be set. Any configuration that is valid for a job configuration is valid in shared, but will be overridden by specific job configurations. Configurable settings for any additional build plugins added to Screwdriver.cd. Annotations is an optional object containing key-value pairs. These can be either pipeline or job-level specifications. Annotation key-value pairs can be completely arbitrary, as in the example, or can modify the execution of the" }, { "data": "Check with your Screwdriver cluster admin to find what annotations are supported to modify your build execution with. Used to designate a non-default executor to run the build. Some available executors are `jenkins`, `k8s-vm` CPU allocated for the VM if using `k8s-vm` executor. `LOW` is configured by default, and indicates 2CPU memory. `HIGH` means 6CPU. RAM allocated for the VM if using `k8s-vm` executor. 
`LOW` is configured by default, and indicates 2GB memory. `HIGH` means 12GB memory. Email addresses to send notifications to and statuses to send notifications for. A dictionary consists of key value pairs. `key: string` is a shorthand for writing as `key:value` as shown. A series of jobs that define the behavior of your builds. This defines the Docker image used for the builds. This value should be the same as you would use for a \"docker pull\" command. Defines the explicit list of commands that are executed in the build, just as if they were entered on the command line. Environment variables will be passed between steps, within the same job. Step definitions are required for all jobs. Step names cannot start with `sd-`, as those steps are reserved for Screwdriver steps. In essence, Screwdriver runs `/bin/sh` in your terminal then executes all the steps; in rare cases, different terminal/shell setups may have unexpected behavior. A predefined job; generally consists of a Docker image and steps. Can only be used when \"template\" is defined. Step names that should be run in a particular order. Will select steps from Template and Steps, with priority given to Steps defined in the job. User-defined steps that will always run at the end of a build (even if the previous steps fail or the build is aborted). Step name always starts with \"teardown-\". A list of stages designed to group jobs with similar purposes. A list of jobs that belong to a stage. It can consist of one or more jobs. A job in a stage could have an empty trigger, indicating that it should run as the initial job in the stage immediately after the list of jobs triggering the stage has finished. A job may execute after the completion of a stage, especially following the conclusion of the last job within the stage or after the teardown job if it has been explicitly defined. All code on this site is licensed under the Yahoo BSD License unless otherwise stated. 2016 Yahoo! Inc. All rights reserved. From here you can search these documents. Enter your search terms below." } ]
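To make the requires semantics concrete, here is a small sketch of a screwdriver.yaml (job names and commands are invented): triggers prefixed with ~ run on an OR basis, while a plain list of job names acts as a join that waits for all of them to succeed.

```
shared:
  image: node:lts

jobs:
  main:
    requires: [~pr, ~commit]              # OR: runs for pull requests or commits
    steps:
      - install: npm install
      - test: npm test
  deploy-west:
    requires: [main]
    steps:
      - deploy: ./deploy.sh west
  deploy-east:
    requires: [main]
    steps:
      - deploy: ./deploy.sh east
  verify:
    requires: [deploy-west, deploy-east]  # join: waits for both deploys to finish successfully
    steps:
      - smoke-test: ./smoke.sh
```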
{ "category": "App Definition and Development", "file_name": "contributing.md", "project_name": "Screwdriver", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Thank you for considering contributing! There are many ways you can help. All issues with Screwdriver can be found in the screwdriver repo. To see what were currently working on, you can check out our digital scrum board in the Projects section of the Screwdriver API repo. If youre not sure what repositories you need to change, consult our Where to Contribute doc. For pointers on developing, checkout the Getting Started Developing docs. Please try to: Patches for fixes, features, and improvements are accepted through pull requests. Here are some tips for contributing: Please ask before embarking on a large improvement so youre not disappointed if it does not align with the goals of the project or owner(s). We use semantic-release, which requires commit messages to be in this specific format: <type>(<scope>): <subject> | Keyword | Description | |:-|:--| | Type | feat (feature), fix (bug fix), docs (documentation), style (formatting, missing semi colons, ), refactor, test (when adding missing tests), chore (maintain) | | Scope | anything that specifies the scope of the commit; can be blank, the issue number that your commit pertains to, or * | | Subject | description of the commit | Important: For any breaking changes that require a major version bump, add BREAKING CHANGE: <message> preceded by a space or two newlines in the footer of the commit message. The rest of the commit message is then used for this. Example commit messages: ``` feat(1234): remove graphiteWidth option BREAKING CHANGE: The graphiteWidth option has been removed. The default graphite width of 10mm is always used for performance reasons. ``` All code on this site is licensed under the Yahoo BSD License unless otherwise stated. 2016 Yahoo! Inc. All rights reserved. From here you can search these documents. Enter your search terms below." } ]
{ "category": "App Definition and Development", "file_name": "quickstart.md", "project_name": "Screwdriver", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "This page will cover how to build and deploy a sample app with Screwdriver in minutes. In this example, we are using the SCM provider Github. First, fork and clone a sample repository into your local development environment and cd into the project directory. We will cover the generic quickstart in this example. ``` $ git clone git@github.com:<YOURUSERNAMEHERE>/quickstart-generic.git $ cd quickstart-generic/ ``` For applications that are better suited to Makefiles and small scripts, we recommend referencing the generic screwdriver.yaml. Now that weve setup our app, we can start developing. This app demonstrates how to run a Makefile and bash script (my_script.sh) in your Screwdriver build. The screwdriver.yaml is the only config file you need for using Screwdriver. In it, you will define all your steps needed to successfully develop, build and deploy your application. See the User Guide -> Configuration section for more details. The shared section is where you would define any attributes that all your jobs will inherit. In our example, we state that all our jobs will run in the same Docker container image buildpack-deps. The image is usually defined in the form of reponame. Alternatively, you can define the image as reponame:taglabel, where taglabel is a version. See the Docker documentation for more information. ``` shared: image: buildpack-deps ``` The jobs section is where all the tasks (or steps) that each job will execute is defined. Jobs can be grouped to form a stage to convey the role of the jobs that are performing actions to achieve the same goal. For example, CI jobs can be grouped as integration, while CD jobs as deployment. A stage can contain one or more jobs, however a job is only allowed to be part of one single stage. The requires keyword denotes the order that jobs will run. Requires is a single job name or array of job names. Special keywords like ~pr or ~commit indicate that the job will run after certain Git events occur: The steps section contains a list of commands to execute. Each step takes the form stepname: commandtorun. The stepname is a convenient label to reference it by. The commandtorun is the single command that is executed during this step. Step names cannot start with sd-, as those steps are reserved for Screwdriver steps. Environment variables will be passed between steps, within the same" }, { "data": "In essence, Screwdriver runs /bin/sh in your terminal then executes all the steps; in rare cases, different terminal/shell setups may have unexpected behavior. In our example, our main job executes a simple piece of inline bash code. The first step (export) exports an environment variable, GREETING=\"Hello, world!\". The second step (hello) echoes the environment variable from the first step. The third step uses metadata, a structured key/value storage of relevant information about a build, to set an arbitrary key in the main job and get it in the second_job. We also define another job called secondjob. In this job, we intend on running a different set of commands. The maketarget step calls a Makefile target to perform some set of actions. This is incredibly useful when you need to perform a multi-line command. The runarbitraryscript executes a script. This is an alternative to a Makefile target where you want to run a series of commands related to this step. 
``` jobs: main: requires: [~pr, ~commit] steps: export: export GREETING=\"Hello, world!\" hello: echo $GREETING set-metadata: meta set example.coverage 99.95 second_job: requires: [main] # second_job will run after main job is done steps: make_target: make greetings get-metadata: meta get example runarbitraryscript: ./my_script.sh ``` Now that we have a working repository, lets head over to the Screwdriver UI to build and deploy an app. (For more information on Screwdriver YAMLs, see the configuration page.) In order to use Screwdriver, you will need to login to Screwdriver using Github, set up your pipeline, and start a build. Click on the Create icon. (You will be redirected to login if you have not already.) Click Login with SCM Provider. You will be asked to give Screwdriver access to your repositories. Choose appropriately and click Authorize. Enter your repository link into the field. SSH or HTTPS link is fine, with #<YOURBRANCHNAME> immediately after (ex: git@github.com:screwdriver-cd/guide.git#test). If no BRANCH_NAME is provided, it will default to the default branch configured in the SCM. Click Use this repository to confirm and then click Create Pipeline. Now that youve created a pipeline, you should be directed to your new pipeline page. Click the Start button to start your build. All code on this site is licensed under the Yahoo BSD License unless otherwise stated. 2016 Yahoo! Inc. All rights reserved. From here you can search these documents. Enter your search terms below." } ]
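The second_job above calls a Makefile target (make greetings) and a shell script (./my_script.sh). Their exact contents live in the sample repository; the sketch below only shows the shape such a Makefile target could take and is not copied from the repo.

```
# Makefile (illustrative)
greetings:
	@echo "Greetings from the Makefile target"
```

my_script.sh can be any executable script — for example, a two-line `#!/bin/sh` script that echoes a message and exits 0.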
{ "category": "App Definition and Development", "file_name": "support.md", "project_name": "Screwdriver", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Screwdriver is a collection of services that facilitate the workflow for Continuous Delivery pipelines. Commit new code User starts a new build by one of the following operations: Notify Screwdriver Signed webhooks notify Screwdrivers API about the change. Trigger execution engine Screwdriver starts a job on the specified execution engine passing the users configuration and git information. Build software Screwdrivers Launcher executes the commands specified in the users configuration after checking out the source code from git inside the desired container. Publish artifacts (optional) User can optionally push produced artifacts to respective artifact repositories (RPMs, Docker images, Node Modules, etc.). Continue pipeline On completion of the job, Screwdrivers Launcher notifies the API and if theres more in the pipeline, the API triggers the next job on the execution engine (GOTO:3). Screwdriver consists of five main components, the first three of which are built/maintained by Screwdriver: REST API RESTful interface for creating, monitoring, and interacting with pipelines. Web UI Human consumable interface for the REST API. Launcher Self-contained tool to clone, setup the environment, and execute the shell commands defined in your job. Execution Engine Pluggable build executor that supports executing commands inside of a container (e.g. Jenkins, Kubernetes, Nomad, and Docker). Datastore Pluggable storage for keeping information about pipelines (e.g. Postgres, MySQL, and Sqlite). Screwdriver application components are running in a Kubernetes environment in AWS. Builds are also launched in same Kubernetes environment under a different namespace. Example environment https://cd.screwdriver.cd/ All code on this site is licensed under the Yahoo BSD License unless otherwise stated. 2016 Yahoo! Inc. All rights reserved. From here you can search these documents. Enter your search terms below." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Spacelift", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Announcing Enhanced VCS Integration Read more here Spacelift is a sophisticated, continuous integration and deployment (CI/CD) platform for infrastructure-as-code, supporting Terraform, OpenTofu, Terragrunt, Pulumi, AWS CloudFormation, AWS CDK, Kubernetes, Ansible, and more. It's designed and implemented by long-time DevOps practitioners based on previous experience with large-scale installations - dozens of teams, hundreds of engineers, and tens of thousands of cloud resources. At the same time, Spacelift is super easy to get started with - you can go from zero to fully managing your cloud resources within less than a minute, with no pre-requisites. It integrates nicely with the large players in the field - notably GitHub and AWS. If you're new to Spacelift, please spend some time browsing through the articles in the same order as they appear in the menu - start with the main concepts and follow with integrations. If you're more advanced, you can navigate directly to the article you need, or use the search feature to find a specific piece of information. If you still have questions, feel free to reach out to us. Yes, we believe it's a good idea. While in an ideal world one CI system would be enough to cover all use cases, we don't live in an ideal world. Regular CI tools can get you started easily, but Terraform has a rather unusual execution model and a highly stateful nature. Also, mind the massive blast radius when things go wrong. We believe Spacelift offers a perfect blend of regular CI's versatility and methodological rigor of a specialized, security-conscious infrastructure tool - enough to give it a shot even if you're currently happy with your infra-as-code CI/CD setup. In the following sections, we'll try to present the main challenges of running Terraform in a general purpose CI system, as well as show how Spacelift addresses those. At the end of the day, it's mostly about two things - collaboration and security. Wait, aren't CIs built for collaboration? Yes, assuming stateless tools and processes. Running stateless builds and tests is what regular CIs are exceptionally good at. But many of us have noticed that deployments are actually trickier to get right. And that's hardly a surprise. They're more stateful, they may depend on what's already running. Terraform and your infrastructure, in general, is an extreme example of a stateful system. It's so stateful that it actually has something called state (see what we just did there?) as one of its core concepts. CIs generally struggle with that. They don't really understand the workflows they run, so they can't for example serialize certain types of jobs. Like terraform apply, which introduces actual changes to your infrastructure. As far as your CI system is concerned, running those in parallel is fair game. But what it does to Terraform is nothing short of a disaster - your state is confused and no longer represents any kind of reality. Untangling this mess can take forever. But you can add manual approval steps Yes, you can. But the whole point of your CI/CD system is to automate your" }, { "data": "First of all, becoming a human semaphore for a software tool isn't the best use of a highly skilled and motivated professional. Also, over-reliance on humans to oversee software processes will inevitably lead to costly mistakes because we, humans, are infinitely more fallible than well-programmed machines. It's ultimately much cheaper to use the right tool for the job than turn yourself into a part of a tool. 
But you can do state locking! Yup, we hear you. In theory, it's a great feature. In practice, it has its limitations. First, it's a massive pain when working as a team. Your CI won't serialize jobs that can write state, and state locking means that all but one of the parallel jobs will simply fail. It's a safe default, that's for sure, but not a great developer experience. And the more people work on your infrastructure, the more frustrating the process will become. And that's just applying changes. By default, running terraform plan locks the state, too. So you can't really run multiple CI jobs in parallel, even if they're only meant to preview changes, because each of them will attempt to lock the state. Yes, you can work around this by explicitly not locking state in CI jobs that you know won't make any state changes, but at this point, you've already put so much work into creating a pipeline that's fragile at best and requires you to manually synchronize it. And we haven't even discussed security yet. Terraform is used to manage infrastructure, which normally requires credentials. Usually, very powerful credentials. Administrative credentials, sometimes. And these can do a lot of damage. The thing with CIs is that you need to provide those credentials statically, and once you do, there's no way you can control how they're used. And that's what makes CIs powerful - after all, they let you run arbitrary code, normally based on some configuration file that you have checked in with your Terraform code. So, what's exactly stopping a prankster from adding terraform destroy -auto-approve as an extra CI step? Or printing out those credentials and using them to mine their crypto of choice? There are better ways to get fired. ...you'll say and we hear you. Those jobs are audited after all. No, if we were disgruntled employees we'd never do something as stupid. We'd get an SSH session and leak those precious credentials this way. Since it's unlikely you rotate them every day, we'd take our sweet time before using them for our nefarious purposes. Which wouldn't be possible with Spacelift BTW, which generates one-off temporary credentials for major cloud providers. But nobody does that! Yes, you don't hear many of those stories. Most mistakes happen to well-meaning people. But in the world of infrastructure, even the tiniest of mistakes can cause major outages - like that typo we once made in our DNS config. That's why Spacelift adds an extra layer of policy that allows you to control - separately from your infrastructure project! - what code can be executed, what changes can be made, when and by whom. This isn't only useful to protect yourself from the baddies, but allows you to implement an automated code review pipeline." } ]
{ "category": "App Definition and Development", "file_name": "login-policy.md", "project_name": "Spacelift", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Announcing Enhanced VCS Integration Read more here Git push policies are triggered on a per-stack basis to determine the action that should be taken for each individual Stack or Module in response to a Git push or Pull Request notification. There are three possible outcomes: Using this policy it is possible to create a very sophisticated, custom-made setup. We can think of two main - and not mutually exclusive - use cases. The first one would be to ignore changes to certain paths - something you'd find useful both with classic monorepos and repositories containing multiple Terraform projects under different paths. The second one would be to only attempt to apply a subset of changes - for example, only commits tagged in a certain way. Each stack and module points at a particular Git branch called a tracked branch. By default, any push to the tracked branch that changes a file in the project root triggers a tracked Run that can be applied. This logic can be changed entirely by a Git push policy, but the tracked branch is always reported as part of the Stack input to the policy evaluator and can be used as a point of reference. When a push policy does not track a new push, the head commit of the stack/module will not be set to the tracked branch head commit. We can address this by navigating to that stack and pressing the sync button (this syncs the tracked branch head commit with the head commit of the stack/module). Spacelift can currently react to two types of events - push and pull request (also called merge request by GitLab). Push events are the default - even if you don't have a push policy set up, we will respond to those events. Pull request events are supported for some VCS providers and are generally received when you open, synchronize (push a new commit), label, or merge the pull request. There are some valid reasons to use pull request events in addition or indeed instead of push ones. One is that when making decisions based on the paths of affected files, push events are often confusing: But there are more reasons depending on how you want to structure your workflow. Here are a few samples of PR-driven policies from real-life use cases, each reflecting a slightly different way of doing things. 
First, let's only trigger proposed runs if a PR exists, and allow any push to the tracked branch to trigger a tracked run: | 0 | 1 | |:-|:--| | 1 2 3 4 5 | package spacelift track { input.push.branch == input.stack.branch } propose { not isnull(input.pullrequest) } ignore { not track; not propose } | ``` 1 2 3 4 5``` ``` package spacelift track { input.push.branch == input.stack.branch } propose { not isnull(input.pullrequest) } ignore { not track; not propose } ``` If you want to enforce that tracked runs are always created from PR merges (and not from direct pushes to the tracked branch), you can tweak the above policy accordingly to just ignore all non-PR events: | 0 | 1 | |:|:| | 1 2 3 4 5 6 | package spacelift track { ispr; input.push.branch == input.stack.branch } propose { ispr } ignore { not ispr } ispr { not" }, { "data": "} | ``` 1 2 3 4 5 6``` ``` package spacelift track { is_pr; input.push.branch == input.stack.branch } propose { is_pr } ignore { not is_pr } ispr { not isnull(input.pull_request) } ``` Here's another example where you respond to a particular PR label (\"deploy\") to automatically deploy changes: | 0 | 1 | |:|:--| | 1 2 3 4 5 6 | package spacelift track { ispr; labeled } propose { true } ispr { not isnull(input.pullrequest) } labeled { input.pullrequest.labels[] == \"deploy\" } | ``` 1 2 3 4 5 6``` ``` package spacelift track { is_pr; labeled } propose { true } ispr { not isnull(input.pull_request) } labeled { input.pullrequest.labels[] == \"deploy\" } ``` Info When a run is triggered from a GitHub Pull Request and the Pull Request is mergeable (ie. there are no merge conflicts), we check out the code for something they call the \"potential merge commit\" - a virtual commit that represents the potential result of merging the Pull Request into its base branch. This should provide better quality, less confusing feedback. Let us know if you notice any irregularities. If you're using pull requests in your flow, it is possible that we'll receive duplicate events. For example, if you push to a feature branch and then open a pull request, we first receive a push event, then a separate pull request (opened) event. When you push another commit to that feature branch, we again receive two events - push and pull request (synchronized). When you merge the pull request, we get two more - push and pull request (closed). It is possible that push policies resolve to the same actionable (not ignore) outcome (eg. track or propose). In those cases instead of creating two separate runs, we debounce the events by deduplicating runs created by them on a per-stack basis. The deduplication key consists of the commit SHA and run type. If your policy returns two different actionable outcomes for two different events associated with a given SHA, both runs will be created. In practice, this would be an unusual corner case and a good occasion to revisit your workflow. When events are deduplicated and you're sampling policy evaluations, you may notice that there are two samples for the same SHA, each with different input. You can generally assume that it's the first one that creates a run. The push policy can also be used to have the new run pre-empt any runs that are currently in progress. The input document includes the in_progress key, which contains an array of runs that are currently either still queued or are awaiting human confirmation. 
You can use it in conjunction with the cancel rule like this: | 0 | 1 | |-:|:--| | 1 | cancel[run.id] { run := input.inprogress[] } | ``` 1``` ``` cancel[run.id] { run := input.inprogress[] } ``` Of course, you can use a more sophisticated approach and only choose to cancel a certain type of run, or runs in a particular state. For example, the rule below will only cancel proposed runs that are currently queued (waiting for the worker): | 0 | 1 | |:-|:--| | 1 2 3 4 5 | cancel[run.id] { run := input.inprogress[] run.type == \"PROPOSED\" run.state == \"QUEUED\" } | ``` 1 2 3 4 5``` ``` cancel[run.id] { run := input.inprogress[] run.type == \"PROPOSED\"" }, { "data": "== \"QUEUED\" } ``` You can also compare branches and cancel proposed runs in queued state pointing to a specific branch using this example policy: | 0 | 1 | |:|:| | 1 2 3 4 5 6 | cancel[run.id] { run := input.inprogress[] run.type == \"PROPOSED\" run.state == \"QUEUED\" run.branch == input.pull_request.head.branch } | ``` 1 2 3 4 5 6``` ``` cancel[run.id] { run := input.inprogress[] run.type == \"PROPOSED\" run.state == \"QUEUED\" run.branch == input.pull_request.head.branch } ``` Please note there are some restrictions on cancelation to be aware of: The track decision sets the new head commit on the affected stack or module. This head commit is what is going to be used when a tracked run is manually triggered, or a task is started on the stack. Usually what you want in this case is to have a new tracked Run, so this is what we do by default. Sometimes, however, you may want to trigger those tracked runs in a specific order or under specific circumstances - either manually or using a trigger policy. So what you want is an option to set the head commit, but not trigger a run. This is what the boolean notrigger rule can do for you. notrigger will only work in conjunction with track decision and will prevent the tracked run from being created. Please note that notrigger does not depend in any way on the track rule - they're entirely independent. Only when interpreting the result of the policy, we will only look at notrigger if track evaluates to true. Here's an example of using the two rules together to always set the new commit on the stack, but not trigger a run - for example, because it's either always triggered manually, through the API, or using a trigger policy: | 0 | 1 | |:|:-| | 1 2 3 | track { input.push.branch == input.stack.branch } propose { not track } notrigger { true } | ``` 1 2 3``` ``` track { input.push.branch == input.stack.branch } propose { not track } notrigger { true } ``` For more information on taking action from comments on Pull Requests, please view the documentation on pull request comments. If you would like to customize messages sent back to your VCS when Spacelift runs are ignored, you can do so using the message function within your Push policy. Please see the example policy below as a reference for this functionality. By default, ignored runs on a stack will return a \"skipped\" status check event, rather than a fail event. If you would like ignored run events to have a failed status check on your VCS, you can do so using the fail function within your Push policy. If a fail result is desired, set this value to true. The following Push policy does not trigger any run within Spacelift. 
Using this policy, we can ensure that the status check within our VCS (in this case, GitHub) fails and returns the message \"I love" }, { "data": "| 0 | 1 | |:-|:--| | 1 2 | fail { true } message[\"I love bacon\"] { true } | ``` 1 2``` ``` fail { true } message[\"I love bacon\"] { true } ``` As a result of the above policy, users would then see this behavior within their GitHub status check: Info Note that this behavior (customization of the message and failing of the check within the VCS), is only applicable when runs do not take place within Spacelift. Some users prefer to manage their Terraform Module versions using git tags, and would like git tag events to push their module to the Spacelift module registry. Using a fairly simple Push policy, this is supported. To do this, you'll want to make sure of the module_version block within a Push policy attached your module, and then set the version using the tag information from the git push event. For example, the following example Push policy will trigger a tracked run when a tag event is detected. The policy then parses the tag event data and uses that value for the module version, in the below example we remove a git tag prefixed with v as the Terraform Module Registry only supports versions in a numeric X.X.X format. It's important to note that for this policy, you will need to provide a mock, non-existent version for proposed runs. This precaution has been taken to ensure that pull requests do not encounter check failures due to the existence of versions that are already in use. | 0 | 1 | |:--|:--| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 | package spacelift moduleversion := version { version := trimprefix(input.push.tag, \"v\") not propose } moduleversion := \"<X.X.X>\" { propose } propose { not isnull(input.pull_request) } | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14``` ``` package spacelift module_version := version { version := trim_prefix(input.push.tag, \"v\") not propose } module_version := \"<X.X.X>\" { propose } propose { not isnull(input.pullrequest) } ``` If you wish to add a track rule to your push policy, the below will start a tracked run when the module version is not empty and the push branch is the same as the one the module branch is tracking. | 0 | 1 | |:--|:--| | 1 2 3 4 | track { module_version != \"\" input.push.branch == input.module.branch } | ``` 1 2 3 4``` ``` track { module_version != \"\" input.push.branch == input.module.branch } ``` By default, we don't trigger runs when a forked repository opens a pull request against your repository. This is because of a security concern: if let's say your infrastructure is open source, someone forks it, implements some unwanted junk in there, then opens a pull request for the original repository, it'd run automatically with the prankster's code included. Info The cause is very similar to GitHub Actions where they don't expose repository secrets when forked repositories open pull requests. If you still want to allow it, you can explicitly do it with allow_fork rule. 
For example, if you trust certain people or organizations: | 0 | 1 | |:-|:--| | 1 2 3 4 5 | propose { true } allowfork { validOwners := {\"johnwayne\", \"microsoft\"} validOwners[input.pullrequest.head_owner] } | ``` 1 2 3 4 5``` ``` propose { true } allow_fork { validOwners := {\"johnwayne\", \"microsoft\"} validOwners[input.pullrequest.headowner] } ``` In the above case, we'll allow a forked repository to run, only if the owner of the forked repository is either johnwayne or" }, { "data": "head_owner field means different things in different VCS providers: In GitHub, headowner is the organization or the person owning the forked repository. It's typically in the URL: https://github.com/<headowner>/<forked_repository> In GitLab, it is the group of the repository which is typically the URL of the repository: https://gitlab.com/<headowner>/<forkedrepository> Azure DevOps is a special case because they don't provide us the friendly name of the headowner. In this case, we need to refer to headowner as the ID of the forked repository's project which is a UUID. One way to figure out this UUID is to open https://dev.azure.com/<organization>/_apis/projects website which lists all projects with their unique IDs. You don't need any special access to this API, you can just simply open it in your browser. Official documentation of the API. In Bitbucket Cloud, headowner means workspace. It's in the URL of the repository: https://www.bitbucket.org/<workspace>/<forkedrepository>. In Bitbucket Datacenter/Server, it is the project key of the repository. The project key is not the display name of the project, but the abbreviation in all caps. The pull_request property on the input to a push policy contains the following fields: The following example shows a push policy that will automatically deploy a PR's changes once it has been approved, any required checks have completed, and the PR has a deploy label added to it: | 0 | 1 | |:--|:| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | package spacelift # Trigger a tracked run if a change is pushed to the stack branch track { affected input.push.branch == input.stack.branch } # Trigger a tracked run if a PR is approved, mergeable, undiverged and has a deploy label track { ispr isclean isapproved ismarkedfordeploy } # Trigger a proposed run if a PR is opened propose { ispr } ispr { not isnull(input.pullrequest) } isclean { input.pullrequest.mergeable input.pullrequest.undiverged } isapproved { input.pullrequest.approved } ismarkedfordeploy { input.pullrequest.labels[] == \"deploy\" } | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37``` ``` package spacelift track { affected input.push.branch == input.stack.branch } track { is_pr is_clean is_approved ismarkedfor_deploy } propose { is_pr } is_pr { not isnull(input.pullrequest) } is_clean { input.pull_request.mergeable input.pull_request.undiverged } is_approved { input.pull_request.approved } ismarkedfor_deploy { input.pullrequest.labels[] == \"deploy\" } ``` Each source control provider has slightly different features, and because of this the exact definition of approved and mergeable varies slightly between providers. The following sections explain the differences. 
Info Please note that we are unable to calculate divergence across forks in Azure DevOps, so the undiverged property will always be false for PRs created from" }, { "data": "As input, Git push policy receives the following document: | 0 | 1 | |:--|:--| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 | { \"inprogress\": [{ \"basedonlocalworkspace\": \"boolean - whether the run stems from a local preview\", \"branch\": \"string - the branch this run is based on\", \"createdat\": \"number - creation Unix timestamp in nanoseconds\", \"triggeredby\": \"string or null - user or trigger policy who triggered the run, if applicable\", \"type\": \"string - run type: proposed, tracked, task, etc.\", \"state\": \"string - run state: queued, unconfirmed, etc.\", \"updatedat\": \"number - last update Unix timestamp in nanoseconds\", \"userprovidedmetadata\": [\"string - blobs of metadata provided using spacectl or the API when interacting with this run\"] }], \"pullrequest\": { \"action\": \"string - opened, reopened, closed, merged, edited, labeled, synchronize, unlabeled\", \"actioninitiator\": \"string\", \"approved\": \"boolean - indicates whether the PR has been approved\", \"author\": \"string\", \"base\": { \"affectedfiles\": [\"string\"], \"author\": \"string\", \"branch\": \"string\", \"createdat\": \"number (timestamp in nanoseconds)\", \"message\": \"string\", \"tag\": \"string\" }, \"closed\": \"boolean\", \"diff\": [\"string - list of files changed between base and head commit\"], \"draft\": \"boolean - indicates whether the PR is marked as draft\", \"head\": { \"affectedfiles\": [\"string\"], \"author\": \"string\", \"branch\": \"string\", \"createdat\": \"number (timestamp in nanoseconds)\", \"message\": \"string\", \"tag\": \"string\" }, \"headowner\": \"string\", \"id\": \"number\", \"labels\": [\"string\"], \"mergeable\": \"boolean - indicates whether the PR can be merged\", \"title\": \"string\", \"undiverged\": \"boolean - indicates whether the PR is up to date with the target branch\" }, \"push\": { // For Git push events, this contains the pushed commit. 
// For Pull Request events, // this contains the head commit or merge commit if available (merge" }, { "data": "\"affectedfiles\": [\"string\"], \"author\": \"string\", \"branch\": \"string\", \"createdat\": \"number (timestamp in nanoseconds)\", \"message\": \"string\", \"tag\": \"string\" }, \"stack\": { \"additionalprojectglobs\": [\"string - list of arbitrary, user-defined selectors\"], \"administrative\": \"boolean\", \"autodeploy\": \"boolean\", \"branch\": \"string\", \"id\": \"string\", \"labels\": [\"string - list of arbitrary, user-defined selectors\"], \"lockedby\": \"optional string - if the stack is locked, this is the name of the user who did it\", \"name\": \"string\", \"namespace\": \"string - repository namespace, only relevant to GitLab repositories\", \"projectroot\": \"optional string - project root as set on the Stack, if any\", \"repository\": \"string\", \"state\": \"string\", \"terraformversion\": \"string or null\", \"trackedcommit\": { \"author\": \"string\", \"branch\": \"string\", \"createdat\": \"number (timestamp in nanoseconds)\", \"hash\": \"string\", \"message\": \"string\" }, \"workerpool\": { \"public\": \"boolean - indicates whether the worker pool is public or not\" } }, \"vcsintegration\": { \"id\": \"string - ID of the VCS integration\", \"name\": \"string - name of the VCS integration\", \"provider\": \"string - possible values are AZUREDEVOPS, BITBUCKETCLOUD, BITBUCKETDATACENTER, GIT, GITHUB, GITHUBENTERPRISE, GITLAB\", \"description\": \"string - description of the VCS integration\", \"isdefault\": \"boolean - indicates whether the VCS integration is the default one or Space-level\", \"space\": { \"id\": \"string\", \"labels\": [\"string\"], \"name\": \"string\" }, \"labels\": [\"string - list of arbitrary, user-defined selectors\"], \"updatedat\": \"number (timestamp in nanoseconds)\", \"createdat\": \"number (timestamp in nanoseconds)\" } } | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94``` ``` { \"in_progress\": [{ \"basedonlocal_workspace\": \"boolean - whether the run stems from a local preview\", \"branch\": \"string - the branch this run is based on\", \"created_at\": \"number - creation Unix timestamp in nanoseconds\", \"triggered_by\": \"string or null - user or trigger policy who triggered the run, if applicable\", \"type\": \"string - run type: proposed, tracked, task, etc.\", \"state\": \"string - run state: queued, unconfirmed, etc.\", \"updated_at\": \"number - last update Unix timestamp in nanoseconds\", \"userprovidedmetadata\": [\"string - blobs of metadata provided using spacectl or the API when interacting with this run\"] }], \"pull_request\": { \"action\": \"string - opened, reopened, closed, merged, edited, labeled, synchronize, unlabeled\", \"action_initiator\": \"string\", \"approved\": \"boolean - indicates whether the PR has been approved\", \"author\": \"string\", \"base\": { \"affected_files\": [\"string\"], \"author\": \"string\", \"branch\": \"string\", \"created_at\": \"number (timestamp in nanoseconds)\", \"message\": \"string\", \"tag\": \"string\" }, \"closed\": \"boolean\", \"diff\": [\"string - list of files changed between base and head commit\"], \"draft\": \"boolean - indicates whether the PR is marked as draft\", \"head\": { \"affected_files\": [\"string\"], \"author\": \"string\", 
\"branch\": \"string\", \"created_at\": \"number (timestamp in nanoseconds)\", \"message\": \"string\", \"tag\": \"string\" }, \"head_owner\": \"string\", \"id\": \"number\", \"labels\": [\"string\"], \"mergeable\": \"boolean - indicates whether the PR can be merged\", \"title\": \"string\", \"undiverged\": \"boolean - indicates whether the PR is up to date with the target branch\" }, \"push\": { // For Git push events, this contains the pushed commit. // For Pull Request events, // this contains the head commit or merge commit if available (merge" }, { "data": "\"affected_files\": [\"string\"], \"author\": \"string\", \"branch\": \"string\", \"created_at\": \"number (timestamp in nanoseconds)\", \"message\": \"string\", \"tag\": \"string\" }, \"stack\": { \"additionalprojectglobs\": [\"string - list of arbitrary, user-defined selectors\"], \"administrative\": \"boolean\", \"autodeploy\": \"boolean\", \"branch\": \"string\", \"id\": \"string\", \"labels\": [\"string - list of arbitrary, user-defined selectors\"], \"locked_by\": \"optional string - if the stack is locked, this is the name of the user who did it\", \"name\": \"string\", \"namespace\": \"string - repository namespace, only relevant to GitLab repositories\", \"project_root\": \"optional string - project root as set on the Stack, if any\", \"repository\": \"string\", \"state\": \"string\", \"terraform_version\": \"string or null\", \"tracked_commit\": { \"author\": \"string\", \"branch\": \"string\", \"created_at\": \"number (timestamp in nanoseconds)\", \"hash\": \"string\", \"message\": \"string\" }, \"worker_pool\": { \"public\": \"boolean - indicates whether the worker pool is public or not\" } }, \"vcs_integration\": { \"id\": \"string - ID of the VCS integration\", \"name\": \"string - name of the VCS integration\", \"provider\": \"string - possible values are AZUREDEVOPS, BITBUCKETCLOUD, BITBUCKETDATACENTER, GIT, GITHUB, GITHUBENTERPRISE, GITLAB\", \"description\": \"string - description of the VCS integration\", \"is_default\": \"boolean - indicates whether the VCS integration is the default one or Space-level\", \"space\": { \"id\": \"string\", \"labels\": [\"string\"], \"name\": \"string\" }, \"labels\": [\"string - list of arbitrary, user-defined selectors\"], \"updated_at\": \"number (timestamp in nanoseconds)\", \"created_at\": \"number (timestamp in nanoseconds)\" } } ``` When triggered by a new module version, this is the schema of the data input that each policy request will receive: | 0 | 1 | |:--|:--| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | { \"module\": { // Module for which the new version was released \"administrative\": \"boolean - is the stack administrative\", \"branch\": \"string - tracked branch of the module\", \"labels\": a message on Slack.\", \"isdefault\": false, \"labels\": [\"bitbucketcloud\", \"paymentsorg\"], \"space\": { \"id\": \"paymentsteamspace-01HN0BF3GMYZQ4NYVNQ1RKQ9M7\", \"labels\": [], \"name\": \"PaymentsTeamSpace\" }, \"createdat\": 1706187931079960000, \"updated_at\": 1706274820310231000 } } | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54``` ``` { \"module\": { // Module for which the new version was released \"administrative\": \"boolean - is the stack administrative\", \"branch\": \"string - tracked branch of the module\", \"labels\": [\"string - list of 
arbitrary, user-defined selectors\"], \"current_version\": \"Newly released module version\", \"id\": \"string - unique ID of the module\", \"name\": \"string - name of the stack\", \"namespace\": \"string - repository namespace, only relevant to GitLab repositories\", \"project_root\": \"optional string - project root as set on the Module, if any\", \"repository\": \"string - name of the source GitHub repository\", \"space\": { \"id\": \"string\", \"labels\": [\"string\"], \"name\": \"string\" }, \"terraform_version\": \"string or null - last Terraform version used to apply changes\", \"worker_pool\": { \"id\": \"string - the worker pool ID, if it is private\", \"labels\": [\"string - list of arbitrary, user-defined selectors, if the worker pool is private\"], \"name\": \"string - name of the worker pool, if it is private\", \"public\": \"boolean - is the worker pool public\" } }, \"pull_request\": { \"action\": \"string - opened, reopened, closed, merged, edited, labeled, synchronize, unlabeled\", \"action_initiator\": \"string\", \"approved\": \"boolean - indicates whether the PR has been approved\", \"author\": \"string\", \"base\": { \"affected_files\": [\"string\"], \"author\": \"string\", \"branch\": \"string\", \"created_at\": \"number (timestamp in nanoseconds)\", \"message\": \"string\", \"tag\": \"string\" } }, \"vcs_integration\": { \"id\": \"bitbucket-for-payments-team\", \"name\": \"Bitbucket for Payments Team\", \"provider\": \"BITBUCKET_CLOUD\", \"description\": \"### Payments Team BB integration\\n\\nThis integration should be only used by the Payments Integrations team. If you need access, drop a message on Slack.\", \"is_default\": false, \"labels\": [\"bitbucketcloud\", \"paymentsorg\"], \"space\": { \"id\": \"paymentsteamspace-01HN0BF3GMYZQ4NYVNQ1RKQ9M7\", \"labels\": [], \"name\": \"PaymentsTeamSpace\" }, \"created_at\": 1706187931079960000, \"updated_at\": 1706274820310231000 } } ``` Based on this input, the policy may define boolean track, propose and ignore rules. The positive outcome of at least one ignore rule causes the push to be ignored, no matter the outcome of other rules. The positive outcome of at least one track rule triggers a tracked run. The positive outcome of at least one propose rule triggers a proposed run. Warning If no rules are matched, the default is to ignore the push. Therefore it is important to always supply an exhaustive set of policies - that is, making sure that they define what to track and what to propose in addition to defining what they ignore. It is also possible to define an auxiliary rule called ignore_track, which overrides a positive outcome of the track rule but does not affect other rules, most notably the propose one. This can be used to turn some of the pushes that would otherwise be applied into test runs. Tip We maintain a library of example policies that are ready to use or that you could tweak to meet your specific" }, { "data": "If you cannot find what you are looking for below or in the library, please reach out to our support and we will craft a policy to do exactly what you need. Ignoring changes to certain paths is something you'd find useful both with classic monorepos and repositories containing multiple Terraform projects under different paths. When evaluating a push, we determine the list of affected files by looking at all the files touched by any of the commits in a given push. Info This list may include false positives - eg. 
in a situation where you delete a given file in one commit, then bring it back in another commit, and then push multiple commits at once. This is a safer default than trying to figure out the exact scope of each push. Let's imagine a situation where you only want to look at changes to Terraform definitions - in HCL or JSON - inside one the production/ or modules/ directory, and have track and propose use their default settings: | 0 | 1 | |:--|:| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 | package spacelift track { input.push.branch == input.stack.branch } propose { input.push.branch != \"\" } ignore { not affected } affected { some i, j, k trackeddirectories := {\"modules/\", \"production/\"} trackedextensions := {\".tf\", \".tf.json\"} path := input.push.affectedfiles[i] startswith(path, trackeddirectories[j]) endswith(path, tracked_extensions[k]) } | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17``` ``` package spacelift track { input.push.branch == input.stack.branch } propose { input.push.branch != \"\" } ignore { not affected } affected { some i, j, k tracked_directories := {\"modules/\", \"production/\"} tracked_extensions := {\".tf\", \".tf.json\"} path := input.push.affected_files[i] startswith(path, tracked_directories[j]) endswith(path, tracked_extensions[k]) } ``` As an aside, note that in order to keep the example readable we had to define ignore in a negative way as per the Anna Karenina principle. A minimal example of this policy is available here. By default when the push policy instructs Spacelift to ignore a certain change, no commit status check is sent back to the VCS. This behavior is explicitly designed to prevent noise in monorepo scenarios where a large number of stacks are linked to the same Git repo. However, in certain cases one may still be interested in learning that the push was ignored, or just getting a commit status check for a given stack when it's set as required as part of GitHub branch protection set of rules, or simply your internal organization rules. In that case, you may find the notify rule useful. The purpose of this rule is to override default notification settings. So if you want to notify your VCS vendor even when a commit is ignored, you can define it like this: | 0 | 1 | |:-|:--| | 1 2 3 4 5 | package spacelift # other rules (including ignore), see above notify { ignore } | ``` 1 2 3 4 5``` ``` package spacelift notify { ignore } ``` Info The notify rule (false by default) only applies to ignored pushes, so you can't set it to false to silence commit status checks for proposed runs. Another possible use case of a Git push policy would be to apply from a newly created tag rather than from a" }, { "data": "This in turn can be useful in multiple scenarios - for example, a staging/QA environment could be deployed every time a certain tag type is applied to a tested branch, thereby providing inline feedback on a GitHub Pull Request from the actual deployment rather than a plan/test. One could also constrain production to only apply from tags unless a Run is explicitly triggered by the user. 
Here's an example of one such policy: | 0 | 1 | |:--|:-| | 1 2 3 4 | package spacelift track { re_match(`^\\d+\\.\\d+\\.\\d+$`, input.push.tag) } propose { input.push.branch != input.stack.branch } | ``` 1 2 3 4``` ``` package spacelift track { re_match(`^\\d+\\.\\d+\\.\\d+$`, input.push.tag) } propose { input.push.branch != input.stack.branch } ``` If no Git push policies are attached to a stack or a module, the default behavior is equivalent to this policy: | 0 | 1 | |:--|:-| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 | package spacelift track { affected input.push.branch == input.stack.branch } propose { affected } propose { affectedpr } ignore { not affected not affectedpr } ignore { input.push.tag != \"\" } affected { filepath := input.push.affectedfiles[] startswith(normalizepath(filepath), normalizepath(input.stack.projectroot)) } affected { filepath := input.push.affectedfiles[] globpattern := input.stack.additionalprojectglobs[] glob.match(globpattern, [\"/\"], normalizepath(filepath)) } affectedpr { filepath := input.pullrequest.diff[] startswith(normalizepath(filepath), normalizepath(input.stack.projectroot)) } affectedpr { filepath := input.pullrequest.diff[] globpattern := input.stack.additionalprojectglobs[] glob.match(globpattern, [\"/\"], normalizepath(filepath)) } # Helper function to normalize paths by removing leading slashes normalize_path(path) = trim(path, \"/\") | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40``` ``` package spacelift track { affected input.push.branch == input.stack.branch } propose { affected } propose { affected_pr } ignore { not affected not affected_pr } ignore { input.push.tag != \"\" } affected { filepath := input.push.affectedfiles[] startswith(normalizepath(filepath), normalizepath(input.stack.project_root)) } affected { filepath := input.push.affectedfiles[] globpattern := input.stack.additionalprojectglobs[] glob.match(globpattern, [\"/\"], normalizepath(filepath)) } affected_pr { filepath := input.pullrequest.diff[] startswith(normalizepath(filepath), normalizepath(input.stack.project_root)) } affected_pr { filepath := input.pullrequest.diff[] globpattern := input.stack.additionalprojectglobs[] glob.match(globpattern, [\"/\"], normalizepath(filepath)) } normalize_path(path) = trim(path, \"/\") ``` There are cases where you want pushes to your repo to trigger a run in Spacelift, but only after a CI/CD pipeline (or a part of it) has completed. An example would be when you want to trigger an infra deploy after some docker image has been built and pushed to a registry. This is achievable via push policies by using the External Dependencies feature. Although we generally recommend using our default scheduling order (tracked runs and tasks, then proposed runs, then drift detection runs), you can also use push policies to prioritize certain runs over others. For example, you may want to prioritize runs triggered by a certain user or a certain branch. To that effect, you can use the boolean prioritize rule to mark a run as prioritized. Here's an example: | 0 | 1 | |:-|:-| | 1 2 3 4 5 | package spacelift # other rules (including ignore), see above prioritize { input.stack.labels[_] == \"prioritize\" } | ``` 1 2 3 4 5``` ``` package spacelift prioritize {" }, { "data": "== \"prioritize\" } ``` The above example will prioritize runs on any stack that has the prioritize label set. 
Please note that run prioritization only works for private worker pools. An attempt to prioritize a run on a public worker pool using this policy will be a no-op. Stack locking can be particularly useful in workflows heavily reliant on pull requests. The push policy enables you to lock and unlock a stack based on specific criteria using the lock and unlock rules. lock rule behavior when a non-empty string is returned: unlock rule behavior when a non-empty string is returned: Info Note that runs are only rejected if the push policy rules result in an attempt to acquire a lock on an already locked stack with a different lock key. If the lock rule is undefined or results in an empty string, runs will not be rejected. Below is an example policy snippet which locks a stack when a pull request is opened or synchronized, and unlocks it when the pull request is closed or merged. Ensure you have added import future.keywords to your policy to use this exact snippet. | 0 | 1 | |:|:-| | 1 2 3 4 5 6 7 8 9 | lockid := sprintf(\"PRID%d\", [input.pullrequest.id]) lock := lockid { input.pullrequest.action in [\"opened\", \"synchronize\"] } unlock := lockid { input.pullrequest.action in [\"closed\", \"merged\"] } | ``` 1 2 3 4 5 6 7 8 9``` ``` lockid := sprintf(\"PRID%d\", [input.pullrequest.id]) lock := lock_id { input.pull_request.action in [\"opened\", \"synchronize\"] } unlock := lock_id { input.pull_request.action in [\"closed\", \"merged\"] } ``` You can further customise this selectively locking and unlocking the stacks whose project root or project globs are set to track the files in the pull request. Here is an example of that: | 0 | 1 | |:--|:--| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 | lockid := sprintf(\"PRID%d\", [input.pullrequest.id]) lock := lockid if { input.pullrequest.action in [\"opened\", \"synchronize\"] affectedpr } unlock := lockid if { input.pullrequest.action in [\"closed\", \"merged\"] affectedpr } affectedpr if { some filepath in input.pullrequest.diff startswith(filepath, input.stack.projectroot) } affectedpr if { some filepath in input.pullrequest.diff some globpattern in input.stack.additionalprojectglobs glob.match(glob_pattern, [\"/\"], filepath) } | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22``` ``` lockid := sprintf(\"PRID%d\", [input.pullrequest.id]) lock := lock_id if { input.pull_request.action in [\"opened\", \"synchronize\"] affected_pr } unlock := lock_id if { input.pull_request.action in [\"closed\", \"merged\"] affected_pr } affected_pr if { some filepath in input.pull_request.diff startswith(filepath, input.stack.project_root) } affected_pr if { some filepath in input.pull_request.diff some globpattern in input.stack.additionalproject_globs glob.match(glob_pattern, [\"/\"], filepath) } ``` Futhermore, with the release of this functionality, you can also lock and unlock through comments. Here is an example: | 0 | 1 | |:--|:--| | 1 2 3 4 | unlock := lockid { input.pullrequest.action == \"commented\" input.pull_request.comment == concat(\" \", [\"/spacelift\", \"unlock\", input.stack.id]) } | ``` 1 2 3 4``` ``` unlock := lock_id { input.pull_request.action == \"commented\" input.pull_request.comment == concat(\" \", [\"/spacelift\", \"unlock\", input.stack.id]) } ``` Using the above addition to your push policy, you can then unlock your stack by commenting something such as: | 0 | 1 | |-:|:| | 1 | /spacelift unlock my-stack-id | ``` 1``` ``` /spacelift unlock my-stack-id ```" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Tekton Pipelines", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "You are viewing the documentation of the lastest versions of Tekton components. Tekton is a cloud-native solution for building CI/CD systems. It consists of Tekton Pipelines, which provides the building blocks, and of supporting components, such as Tekton CLI and Tekton Catalog, that make Tekton a complete ecosystem. Tekton is part of the CD Foundation, a Linux Foundation project. For more information, see the Overview of Tekton. Get started with Tekton Installation instructions for Tekton components Result storage for Tekton CI/CD data. Building Blocks of Tekton CI/CD Workflow Conceptual and technical information about Tekton Event Based Triggers for Tekton Pipelines Guides to help you complete a specific goal Command-Line Interface Web-based UI for Tekton Pipelines and Tekton Triggers resources Reusable Task and Pipeline Resources Manage Tekton CI/CD Building Blocks Artifact signatures and attestations for Tekton Contribution guidelines Was this page helpful?" } ]
{ "category": "App Definition and Development", "file_name": "getting-started.md", "project_name": "Tekton Pipelines", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "You are viewing the documentation of the lastest versions of Tekton components. Welcome to Tekton. Tekton is an open-source cloud native CICD (Continuous Integration and Continuous Delivery/Deployment) solution. Check the Concepts section to learn more about how Tekton works. Lets get started! You can go ahead and create your first task with Tekton. If you prefer, watch the following videos to learn the basics of how Tekton works before your first hands-on experience: Set up and run your first Tekton Task Create and run your first Tekton Pipeline Create and run your first Tekton Trigger. Create and sign artifact provenance with Tekton Chains Was this page helpful?" } ]
{ "category": "App Definition and Development", "file_name": "docs.testkube.io.md", "project_name": "Testkube", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Welcome to our documentation! This is the place where you'll find everything you need to get ramped up and start testing with Testkube. Testkube is a Kubernetes-native testing framework for Testers, Developers, and DevOps practitioners that allows you to automate the executions of your existing testing tools inside your Kubernetes cluster, removing all the complexity from your CI/CD pipelines. Get up and running by installing the Testkube CLI and its components within minutes. Learn how to create, run, and display the results for your first test. Incorporate Testkube into your CI/CD environment with the tools you already use. To start testing with Testkube, choose your favorite testing tool: Cypress k6 Postman Ginkgo JMeter Gradle Maven Artillery Playwright KubePug cURL" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Unleash", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Welcome to Unleash's documentation, your one-stop shop for everything Unleash. Whether you're just getting started or have been using Unleash for years, you should be able to find answers to all your questions here. Dive in and start exploringgreat things await. If you need a hand, our team is just a click away. Have questions that you can't find the answer to in these docs? If you've got questions or want to chat with the team and other Unleash users join our Slack community or a GitHub Discussion. Our Slack tends to be more active, but you're welcome to use whatever works best for you. Ask our AI intern (still in training, but not bad). The \"Ask AI\" button lives in the bottom right corner on every page. You can also follow us on Twitter, LinkedIn and visit our website for ongoing updates and content." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "werf", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "werf follows the principles of the IaC (Infrastructure as Code) approach and encourages the user to store the project delivery configuration along with the application code in Git and to use external dependencies responsibly. This is accomplished by a mechanism called giterminism. A typical project configuration includes several files: werf.yaml is the main configuration file of a project in werf. Its primary purpose is to bind build and deploy instructions. These instructions are defined for each application component. They can be in two formats: Refer to the Build section of the documentation for more details on the assembly configuration. These instructions are defined for the entire application (and all deployment environments) and should take the form of a Helm Chart. Refer to the Deploy section of the documentation for details on the deployment configuration. ``` project: app configVersion: 1 image: backend context: backend dockerfile: Dockerfile image: frontend context: frontend dockerfile: Dockerfile ``` ``` $ tree -a . .helm templates NOTES.txt _helpers.tpl deployment.yaml hpa.yaml ingress.yaml service.yaml serviceaccount.yaml values.yaml backend Dockerfile ... frontend Dockerfile ... werf.yaml ```" } ]
{ "category": "App Definition and Development", "file_name": "workflows.md", "project_name": "Woodpecker CI", "subcategory": "Continuous Integration & Delivery" }
[ { "data": "Woodpecker is a simple yet powerful CI/CD engine with great extensibility. It focuses on executing pipelines inside containers. If you are already using containers in your daily workflow, you'll for sure love Woodpecker. ``` steps: - name: build image: debian commands: - echo \"This is the build step\" - name: a-test-step image: debian commands: - echo \"Testing..\"``` ``` steps: - name: build- image: debian+ image: mycompany/image-with-awscli commands: - aws help``` ``` steps: - name: build image: debian commands: - touch myfile - name: a-test-step image: debian commands: - cat myfile``` ``` FROM laszlocloud/kubectlCOPY deploy /usr/local/deployENTRYPOINT [\"/usr/local/deploy\"]``` ``` kubectl apply -f $PLUGIN_TEMPLATE``` ``` steps: deploy-to-k8s: image: laszlocloud/my-k8s-plugin settings: template: config/k8s/service.yaml``` See plugin docs." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "Apache Hadoop 3.3.6 is an update to the Hadoop 3.3.x release branch. Users are encouraged to read the full set of release notes. This page provides an overview of the major changes. Starting from this release, Hadoop publishes Software Bill of Materials (SBOM) using CycloneDX Maven plugin. For more information about SBOM, please go to SBOM. HDFS Router-Router Based Federation now supports storing delegation tokens on MySQL, HADOOP-18535 which improves token operation throughput over the original Zookeeper-based implementation. HADOOP-18671 moved a number of HDFS-specific APIs to Hadoop Common to make it possible for certain applications that depend on HDFS semantics to run on other Hadoop compatible file systems. In particular, recoverLease() and isFileClosed() are exposed through LeaseRecoverable interface. While setSafeMode() is exposed through SafeMode interface. The abfs has a critical bug fix HADOOP-18546. ABFS. Disable purging list of in-progress reads in abfs stream close(). All users of the abfs connector in hadoop releases 3.3.2+ MUST either upgrade or disable prefetching by setting fs.azure.readaheadqueue.depth to 0 Consult the parent JIRA HADOOP-18521 ABFS ReadBufferManager buffer sharing across concurrent HTTP requests for root cause analysis, details on what is affected, and mitigations. HADOOP-18103. High performance vectored read API in Hadoop The PositionedReadable interface has now added an operation for Vectored IO (also known as Scatter/Gather IO): ``` void readVectored(List<? extends FileRange> ranges, IntFunction<ByteBuffer> allocate) ``` All the requested ranges will be retrieved into the supplied byte buffers -possibly asynchronously, possibly in parallel, with results potentially coming in out-of-order. Benchmarking of enhanced Apache ORC and Apache Parquet clients through file:// and s3a:// show significant improvements in query performance. Further Reading: FsDataInputStream. Hadoop Vectored IO: Your Data Just Got Faster! Apachecon 2022 talk. The new Intermediate Manifest Committer uses a manifest file to commit the work of successful task attempts, rather than renaming directories. Job commit is matter of reading all the manifests, creating the destination directories (parallelized) and renaming the files, again in parallel. This is both fast and correct on Azure Storage and Google GCS, and should be used there instead of the classic v1/v2 file output committers. It is also safe to use on HDFS, where it should be faster than the v1 committer. It is however optimized for cloud storage where list and rename operations are significantly slower; the benefits may be less. More details are available in the manifest committer. documentation. HDFS-16400, HDFS-16399, HDFS-16396, HDFS-16397, HDFS-16413, HDFS-16457. A number of Datanode configuration options can be changed without having to restart the datanode. This makes it possible to tune deployment configurations without cluster-wide Datanode Restarts. See DataNode.java for the list of dynamically reconfigurable attributes. A lot of dependencies have been upgraded to address recent CVEs. Many of the CVEs were not actually exploitable through the Hadoop so much of this work is just due" }, { "data": "However applications which have all the library is on a class path may be vulnerable, and the ugprades should also reduce the number of false positives security scanners report. We have not been able to upgrade every single dependency to the latest version there is. 
Some of those changes are fundamentally incompatible. If you have concerns about the state of a specific library, consult the Apache JIRA issue tracker to see if an issue has been filed, whether discussions have taken place about the library in question, and whether or not there is already a fix in the pipeline. Please don't file new JIRAs about dependency-X.Y.Z having a CVE without searching for an existing issue first. As an open-source project, contributions in this area are always welcome, especially in testing the active branches, testing applications downstream of those branches, and checking whether updated dependencies trigger regressions. Hadoop HDFS is a distributed filesystem allowing remote callers to read and write data. Hadoop YARN is a distributed job submission/execution engine allowing remote callers to submit arbitrary work into the cluster. Unless a Hadoop cluster is deployed with caller authentication via Kerberos, anyone with network access to the servers has unrestricted access to the data and the ability to run whatever code they want in the system. In production, there are generally three deployment patterns which can, with care, keep data and computing resources private. 1. Physical cluster: configure Hadoop security, usually bonded to the enterprise Kerberos/Active Directory systems. Good. 2. Cloud: transient or persistent single or multiple user/tenant cluster with private VLAN and security. Good. Consider Apache Knox for managing remote access to the cluster. 3. Cloud: transient single user/tenant cluster with private VLAN and no security at all. Requires careful network configuration, as this is the sole means of securing the cluster. Consider Apache Knox for managing remote access to the cluster. If you deploy a Hadoop cluster in-cloud without security, and without configuring a VLAN to restrict access to trusted users, you are implicitly sharing your data and computing resources with anyone with network access. If you do deploy an insecure cluster this way then port scanners will inevitably find it and submit crypto-mining jobs. If this happens to you, please do not report this as a CVE or security issue: it is utterly predictable. Secure your cluster if you want it to remain exclusively your cluster. Finally, if you are using Hadoop as a service deployed/managed by someone else, do determine what security their products offer and make sure it meets your requirements. The Hadoop documentation includes the information you need to get started using Hadoop. Begin with the Single Node Setup, which shows you how to set up a single-node Hadoop installation. Then move on to the Cluster Setup to learn how to set up a multi-node Hadoop installation. Before deploying Hadoop in production, read Hadoop in Secure Mode, and follow its instructions to secure your cluster." } ]
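The vectored read entry above is the most API-heavy item in these notes, so a short usage sketch may help. This is a minimal, hedged example, assuming the FileRange.createFileRange() factory and the CompletableFuture-based FileRange.getData() accessor shipped in hadoop-common alongside PositionedReadable.readVectored(); the s3a bucket and object path are hypothetical, and the FsDataInputStream documentation for your release remains the authoritative reference.

```java
// Sketch only: reads two non-contiguous ranges of one file with the vectored
// read API described above. Bucket and path names are hypothetical.
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VectoredReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("s3a://example-bucket/data/part-0000.orc"); // hypothetical object
    FileSystem fs = path.getFileSystem(conf);
    try (FSDataInputStream in = fs.open(path)) {
      // Two ranges; the filesystem may fetch them asynchronously and in parallel.
      List<FileRange> ranges = Arrays.asList(
          FileRange.createFileRange(0, 4096),          // first 4 KiB
          FileRange.createFileRange(1L << 20, 8192));  // 8 KiB starting at 1 MiB
      in.readVectored(ranges, ByteBuffer::allocate);
      for (FileRange range : ranges) {
        ByteBuffer data = range.getData().join(); // block until this range completes
        System.out.println("offset " + range.getOffset() + " -> " + data.remaining() + " bytes");
      }
    }
  }
}
```

Because results can arrive out of order, each range carries its own future; a caller that wants maximum overlap can act on whichever range completes first rather than joining them in list order.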
{ "category": "App Definition and Development", "file_name": "CHANGELOG.3.3.4.html.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. okhttp has been updated to address CVE-2021-0341 Apache Xerces has been updated to 2.12.2 to fix CVE-2022-23437 We have recently become aware that libraries which include a shaded apache httpclient libraries (hadoop-client-runtime.jar, aws-java-sdk-bundle.jar, gcs-connector-shaded.jar, cos_api-bundle-5.6.19.jar) all load and use the unshaded resource mozilla/public-suffix-list.txt. If an out of date version of this is found on the classpath first, attempts to negotiate TLS connections may fail with the error Certificate doesnt match any of the subject alternative names. This release does not declare the hadoop-cos library to be a dependency of the hadoop-cloud-storage POM, so applications depending on that module are no longer exposed to this issue. If an application requires use of the hadoop-cos module, please declare an explicit dependency. Downgrades Jackson from 2.13.2 to 2.12.7 to fix class conflicts in downstream projects. This version of jackson does contain the fix for CVE-2020-36518. Netty has been updated to address CVE-2019-20444, CVE-2019-20445 and CVE-2022-24823 The AWS SDK has been updated to 1.12.262 to address jackson CVE-2018-7489" } ]
{ "category": "App Definition and Development", "file_name": "CHANGELOG.3.3.5.html.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "Apache Hadoop 3.3.6 is an update to the Hadoop 3.3.x release branch. Users are encouraged to read the full set of release notes. This page provides an overview of the major changes. Starting from this release, Hadoop publishes Software Bill of Materials (SBOM) using CycloneDX Maven plugin. For more information about SBOM, please go to SBOM. HDFS Router-Router Based Federation now supports storing delegation tokens on MySQL, HADOOP-18535 which improves token operation throughput over the original Zookeeper-based implementation. HADOOP-18671 moved a number of HDFS-specific APIs to Hadoop Common to make it possible for certain applications that depend on HDFS semantics to run on other Hadoop compatible file systems. In particular, recoverLease() and isFileClosed() are exposed through LeaseRecoverable interface. While setSafeMode() is exposed through SafeMode interface. The abfs has a critical bug fix HADOOP-18546. ABFS. Disable purging list of in-progress reads in abfs stream close(). All users of the abfs connector in hadoop releases 3.3.2+ MUST either upgrade or disable prefetching by setting fs.azure.readaheadqueue.depth to 0 Consult the parent JIRA HADOOP-18521 ABFS ReadBufferManager buffer sharing across concurrent HTTP requests for root cause analysis, details on what is affected, and mitigations. HADOOP-18103. High performance vectored read API in Hadoop The PositionedReadable interface has now added an operation for Vectored IO (also known as Scatter/Gather IO): ``` void readVectored(List<? extends FileRange> ranges, IntFunction<ByteBuffer> allocate) ``` All the requested ranges will be retrieved into the supplied byte buffers -possibly asynchronously, possibly in parallel, with results potentially coming in out-of-order. Benchmarking of enhanced Apache ORC and Apache Parquet clients through file:// and s3a:// show significant improvements in query performance. Further Reading: FsDataInputStream. Hadoop Vectored IO: Your Data Just Got Faster! Apachecon 2022 talk. The new Intermediate Manifest Committer uses a manifest file to commit the work of successful task attempts, rather than renaming directories. Job commit is matter of reading all the manifests, creating the destination directories (parallelized) and renaming the files, again in parallel. This is both fast and correct on Azure Storage and Google GCS, and should be used there instead of the classic v1/v2 file output committers. It is also safe to use on HDFS, where it should be faster than the v1 committer. It is however optimized for cloud storage where list and rename operations are significantly slower; the benefits may be less. More details are available in the manifest committer. documentation. HDFS-16400, HDFS-16399, HDFS-16396, HDFS-16397, HDFS-16413, HDFS-16457. A number of Datanode configuration options can be changed without having to restart the datanode. This makes it possible to tune deployment configurations without cluster-wide Datanode Restarts. See DataNode.java for the list of dynamically reconfigurable attributes. A lot of dependencies have been upgraded to address recent CVEs. Many of the CVEs were not actually exploitable through the Hadoop so much of this work is just due" }, { "data": "However applications which have all the library is on a class path may be vulnerable, and the ugprades should also reduce the number of false positives security scanners report. We have not been able to upgrade every single dependency to the latest version there is. 
Some of those changes are fundamentally incompatible. If you have concerns about the state of a specific library, consult the Apache JIRA issue tracker to see if an issue has been filed, discussions have taken place about the library in question, and whether or not there is already a fix in the pipeline. Please dont file new JIRAs about dependency-X.Y.Z having a CVE without searching for any existing issue first As an open-source project, contributions in this area are always welcome, especially in testing the active branches, testing applications downstream of those branches and of whether updated dependencies trigger regressions. Hadoop HDFS is a distributed filesystem allowing remote callers to read and write data. Hadoop YARN is a distributed job submission/execution engine allowing remote callers to submit arbitrary work into the cluster. Unless a Hadoop cluster is deployed with caller authentication with Kerberos, anyone with network access to the servers has unrestricted access to the data and the ability to run whatever code they want in the system. In production, there are generally three deployment patterns which can, with care, keep data and computing resources private. 1. Physical cluster: configure Hadoop security, usually bonded to the enterprise Kerberos/Active Directory systems. Good. 1. Cloud: transient or persistent single or multiple user/tenant cluster with private VLAN and security. Good. Consider Apache Knox for managing remote access to the cluster. 1. Cloud: transient single user/tenant cluster with private VLAN and no security at all. Requires careful network configuration as this is the sole means of securing the cluster.. Consider Apache Knox for managing remote access to the cluster. If you deploy a Hadoop cluster in-cloud without security, and without configuring a VLAN to restrict access to trusted users, you are implicitly sharing your data and computing resources with anyone with network access If you do deploy an insecure cluster this way then port scanners will inevitably find it and submit crypto-mining jobs. If this happens to you, please do not report this as a CVE or security issue: it is utterly predictable. Secure your cluster if you want to remain exclusively your cluster. Finally, if you are using Hadoop as a service deployed/managed by someone else, do determine what security their products offer and make sure it meets your requirements. The Hadoop documentation includes the information you need to get started using Hadoop. Begin with the Single Node Setup which shows you how to set up a single-node Hadoop installation. Then move on to the Cluster Setup to learn how to set up a multi-node Hadoop installation. Before deploying Hadoop in production, read Hadoop in Secure Mode, and follow its instructions to secure your cluster." } ]
{ "category": "App Definition and Development", "file_name": "CHANGELOG.3.4.0.html.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "Apache Hadoop 3.4.0 is an update to the Hadoop 3.4.x release branch. Users are encouraged to read the full set of release notes. This page provides an overview of the major changes. HADOOP-18073 S3A: Upgrade AWS SDK to V2 This release upgrade Hadoops AWS connector S3A from AWS SDK for Java V1 to AWS SDK for Java V2. This is a significant change which offers a number of new features including the ability to work with Amazon S3 Express One Zone Storage - the new high performance, single AZ storage class. HDFS-15382 Split one FsDatasetImpl lock to volume grain locks. Throughput is one of the core performance evaluation for DataNode instance. However, it does not reach the best performance especially for Federation deploy all the time although there are different improvement, because of the global coarse-grain lock. These series issues (include HDFS-16534, HDFS-16511, HDFS-15382 and HDFS-16429.) try to split the global coarse-grain lock to fine-grain lock which is double level lock for blockpool and volume, to improve the throughput and avoid lock impacts between blockpools and volumes. YARN-5597 YARN Federation improvements. We have enhanced the YARN Federation functionality for improved usability. The enhanced features are as follows: 1. YARN Router now boasts a full implementation of all interfaces including the ApplicationClientProtocol, ResourceManagerAdministrationProtocol, and RMWebServiceProtocol. 2. YARN Router support for application cleanup and automatic offline mechanisms for subCluster. 3. Code improvements were undertaken for the Router and AMRMProxy, along with enhancements to previously pending functionalities. 4. Audit logs and Metrics for Router received upgrades. 5. A boost in cluster security features was achieved, with the inclusion of Kerberos support. 6. The page function of the router has been enhanced. 7. A set of commands has been added to the Router side for operating on SubClusters and Policies. YARN-10496 Support Flexible Auto Queue Creation in Capacity Scheduler Capacity Scheduler resource distribution mode was extended with a new allocation mode called weight mode. Defining queue capacities with weights allows the users to use the newly added flexible queue auto creation mode. Flexible mode now supports the dynamic creation of both parent queues and leaf queues, enabling the creation of complex queue hierarchies application submission time. YARN-10888 New capacity modes for Capacity Scheduler Capacity Schedulers resource distribution was completely refactored to be more flexible and extensible. There is a new concept called Capacity Vectors, which allows the users to mix various resource types in the hierarchy, and also in a single queue. With this optionally enabled feature it is now possible to define different resources with different units, like memory with GBs, vcores with percentage values, and GPUs/FPGAs with weights, all in the same" }, { "data": "YARN-10889 Queue Creation in Capacity Scheduler - Various improvements In addition to the two new features above, there were a number of commits for improvements and bug fixes in Capacity Scheduler. The HDFS RBF functionality has undergone significant enhancements, encompassing over 200 commits for feature improvements, new functionalities, and bug fixes. Important features and improvements are as follows: Feature HDFS-15294 HDFS Federation balance tool introduces one tool to balance data across different namespace. HDFS-13522, HDFS-16767 Support observer node from Router-Based Federation. 
Improvement HADOOP-13144, HDFS-13274, HDFS-15757 These tickets have enhanced IPC throughput between Router and NameNode via multiple connections per user, and optimized connection management. HDFS-14090 RBF: Improved isolation for downstream name nodes. {Static} The Router supports assignment of a dedicated number of RPC handlers to achieve isolation for all downstream nameservices it is configured to proxy. Since large or busy clusters may have relatively higher RPC traffic to the namenode compared to other clusters' namenodes, this feature, if enabled, allows admins to configure a higher number of RPC handlers for busy clusters. HDFS-17128 RBF: SQLDelegationTokenSecretManager should use version of tokens updated by other routers. The SQLDelegationTokenSecretManager enhances performance by maintaining processed tokens in memory. However, there is a potential issue of router cache inconsistency due to token loading and renewal. This issue has been addressed by the resolution of HDFS-17128. HDFS-17148 RBF: SQLDelegationTokenSecretManager must cleanup expired tokens in SQL. SQLDelegationTokenSecretManager, while fetching and temporarily storing tokens from SQL in a memory cache with a short TTL, faces an issue where expired tokens are not efficiently cleaned up, leading to a buildup of expired tokens in the SQL database. This issue has been addressed by the resolution of HDFS-17148. Others Other changes to HDFS RBF include WebUI, command line, and other improvements. Please refer to the release document. HDFS EC has made code improvements and fixed some bugs. Important improvements and bugs are as follows: Improvement HDFS-16613 EC: Improve performance of decommissioning dn with many ec blocks. In an HDFS cluster with a lot of EC blocks, decommissioning a DN is very slow. The reason is that, unlike replicated blocks, which can be copied from any DN holding the same replica, EC blocks have to be replicated from the decommissioning DN. The configurations dfs.namenode.replication.max-streams and dfs.namenode.replication.max-streams-hard-limit will limit the replication speed, but increasing these configurations creates risk for the whole cluster's network. So a new configuration was added to throttle the decommissioning DN, distinguished from the cluster-wide max-streams limit. HDFS-16663 EC: Allow block reconstruction pending timeout refreshable to increase decommission performance. In HDFS-16613, increasing the value of dfs.namenode.replication.max-streams-hard-limit would maximize the IO performance of the decommissioning DN, which has a lot of EC blocks. Besides this, we also need to decrease the value of dfs.namenode.reconstruction.pending.timeout-sec (default 5 minutes) to shorten the interval time for checking" }, { "data": "Otherwise the decommissioning node would sit idle waiting for copy tasks for most of those 5 minutes. During decommission, we may need to reconfigure these 2 parameters several times. In HDFS-14560, the dfs.namenode.replication.max-streams-hard-limit can already be reconfigured dynamically without a namenode restart, and the dfs.namenode.reconstruction.pending.timeout-sec parameter also needs to be reconfigurable dynamically (a reconfiguration sketch follows this entry). Bug HDFS-16456 EC: Decommission a rack with only one dn will fail when the rack number is equal with replication. In the scenario below, decommission will fail with the TOO_MANY_NODES_ON_RACK reason: - Enable EC policy, such as RS-6-3-1024k. 
- The rack number in this cluster is equal to or less than the replication number (9). - A rack only has one DN, and this DN is being decommissioned. This issue has been addressed by the resolution of HDFS-16456. HDFS-17094 EC: Fix bug in block recovery when there are stale datanodes. During block recovery, the RecoveryTaskStriped in the datanode expects a one-to-one correspondence between rBlock.getLocations() and rBlock.getBlockIndices(). However, if there are stale locations during a NameNode heartbeat, this correspondence may be disrupted. Specifically, although there are no stale locations in recoveryLocations, the block indices array remains complete. This discrepancy causes BlockRecoveryWorker.RecoveryTaskStriped#recover to generate an incorrect internal block ID, leading to a failure in the recovery process as the corresponding datanode cannot locate the replica. This issue has been addressed by the resolution of HDFS-17094. HDFS-17284. EC: Fix int overflow in calculating numEcReplicatedTasks and numReplicationTasks during block recovery. Due to an integer overflow in the calculation of numReplicationTasks or numEcReplicatedTasks, the NameNode's configuration parameter dfs.namenode.replication.max-streams-hard-limit failed to take effect. This led to an excessive number of tasks being sent to the DataNodes, consequently occupying too much of their memory. This issue has been addressed by the resolution of HDFS-17284. Others For other improvements and fixes for HDFS EC, please refer to the release document. A lot of dependencies have been upgraded to address recent CVEs. Many of the CVEs were not actually exploitable through Hadoop, so much of this work is just due diligence. However, applications which have all the libraries on a class path may be vulnerable, and the upgrades should also reduce the number of false positives security scanners report. We have not been able to upgrade every single dependency to the latest version there is. Some of those changes are fundamentally incompatible. If you have concerns about the state of a specific library, consult the Apache JIRA issue tracker to see if an issue has been filed, whether discussions have taken place about the library in question, and whether or not there is already a fix in the pipeline. Please don't file new JIRAs about dependency-X.Y.Z" }, { "data": "having a CVE without searching for an existing issue first. As an open-source project, contributions in this area are always welcome, especially in testing the active branches, testing applications downstream of those branches, and checking whether updated dependencies trigger regressions. Hadoop HDFS is a distributed filesystem allowing remote callers to read and write data. Hadoop YARN is a distributed job submission/execution engine allowing remote callers to submit arbitrary work into the cluster. Unless a Hadoop cluster is deployed with caller authentication via Kerberos, anyone with network access to the servers has unrestricted access to the data and the ability to run whatever code they want in the system. In production, there are generally three deployment patterns which can, with care, keep data and computing resources private. 1. Physical cluster: configure Hadoop security, usually bonded to the enterprise Kerberos/Active Directory systems. Good. 2. Cloud: transient or persistent single or multiple user/tenant cluster with private VLAN and security. Good. Consider Apache Knox for managing remote access to the cluster. 3. Cloud: transient single user/tenant cluster with private VLAN and no security at all. 
Requires careful network configuration, as this is the sole means of securing the cluster. Consider Apache Knox for managing remote access to the cluster. If you deploy a Hadoop cluster in-cloud without security, and without configuring a VLAN to restrict access to trusted users, you are implicitly sharing your data and computing resources with anyone with network access. If you do deploy an insecure cluster this way then port scanners will inevitably find it and submit crypto-mining jobs. If this happens to you, please do not report this as a CVE or security issue: it is utterly predictable. Secure your cluster if you want it to remain exclusively your cluster. Finally, if you are using Hadoop as a service deployed/managed by someone else, do determine what security their products offer and make sure it meets your requirements. In HADOOP-18197, we upgraded the Protobuf in hadoop-thirdparty to version 3.21.12. This version may have compatibility issues with certain versions of JDK8, and you may encounter some errors (please refer to the discussion in HADOOP-18197 for specific details). To address this issue, we recommend upgrading the JDK version in your production environment to a higher version (> JDK8). We will resolve this issue by upgrading hadoop-thirdparty's Protobuf to a higher version in a future release of 3.4.x. Please note that we will discontinue support for JDK8 in future releases of 3.4.x. The Hadoop documentation includes the information you need to get started using Hadoop. Begin with the Single Node Setup, which shows you how to set up a single-node Hadoop installation. Then move on to the Cluster Setup to learn how to set up a multi-node Hadoop installation. Before deploying Hadoop in production, read Hadoop in Secure Mode, and follow its instructions to secure your cluster." } ]
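Several items in the 3.3.6 and 3.4.0 entries above (the dynamically reconfigurable DataNode options, and the decommission tuning of dfs.namenode.replication.max-streams-hard-limit noted under HDFS-14560) rely on Hadoop's dynamic reconfiguration mechanism. The sketch below shows one hedged way to trigger it programmatically through the DFSAdmin tool after editing hdfs-site.xml; the nn-host:8020 address is hypothetical, and the equivalent CLI form is hdfs dfsadmin -reconfig namenode <host:ipc_port> start.

```java
// Sketch only: ask a NameNode to re-read reconfigurable properties (for example
// dfs.namenode.replication.max-streams-hard-limit) without a restart, then poll
// the status of the background reconfiguration task. "nn-host:8020" is hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.tools.DFSAdmin;
import org.apache.hadoop.util.ToolRunner;

public class ReconfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Start the reconfiguration task on the NameNode.
    int rc = ToolRunner.run(conf, new DFSAdmin(conf),
        new String[] {"-reconfig", "namenode", "nn-host:8020", "start"});
    if (rc == 0) {
      // Report which properties changed, and any that could not be applied.
      ToolRunner.run(conf, new DFSAdmin(conf),
          new String[] {"-reconfig", "namenode", "nn-host:8020", "status"});
    }
  }
}
```

The same pattern should apply to the reconfigurable DataNode attributes listed in DataNode.java: swap "namenode" for "datanode" and point at the DataNode's IPC address instead.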
{ "category": "App Definition and Development", "file_name": "CHANGELOG.3.2.4.html.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "| JIRA | Summary | Priority | Component | Reporter | Contributor | |:-|:--|:--|:|:|:| | HDFS-15380 | RBF: Could not fetch real remote IP in RouterWebHdfsMethods | Major | webhdfs | Tao Li | Tao Li | | HDFS-15814 | Make some parameters configurable for DataNodeDiskMetrics | Major | hdfs | Tao Li | Tao Li | | HDFS-16265 | Refactor HDFS tool tests for better reuse | Blocker | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HADOOP-17956 | Replace all default Charset usage with UTF-8 | Major | common | Viraj Jasani | Viraj Jasani | | HDFS-16278 | Make HDFS snapshot tools cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16285 | Make HDFS ownership tools cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16419 | Make HDFS data transfer tools cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16511 | Improve lock type for ReplicaMap under fine-grain lock mode. | Major | hdfs | Mingxiang Li | Mingxiang Li | | HDFS-16534 | Split datanode block pool locks to volume grain. | Major | datanode | Mingxiang Li | Mingxiang Li | | HADOOP-18219 | Fix shadedclient test failure | Blocker | test | Gautham Banasandra | Akira Ajisaka | | HADOOP-18621 | CryptoOutputStream::close leak when encrypted zones + quota exceptions | Critical | fs | Colm Dougan | Colm Dougan | | YARN-5597 | YARN Federation improvements | Major | federation | Subramaniam Krishnan | Subramaniam Krishnan | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:|:|:--|:--|:--|:--| | HADOOP-17010 | Add queue capacity weights support in FairCallQueue | Major | ipc | Fengnan Li | Fengnan Li | | HDFS-15288 | Add Available Space Rack Fault Tolerant BPP | Major | block placement | Ayush Saxena | Ayush Saxena | | HDFS-13183 | Standby NameNode process getBlocks request to reduce Active load | Major | balancer & mover, namenode | Xiaoqiao He | Xiaoqiao He | | HDFS-15463 | Add a tool to validate FsImage | Major | namenode | Tsz-wo Sze | Tsz-wo Sze | | HADOOP-17165 | Implement service-user feature in DecayRPCScheduler | Major | rpc-server | Takanobu Asanuma | Takanobu Asanuma | | HADOOP-15891 | Provide Regex Based Mount Point In Inode Tree | Major | viewfs | zhenzhao wang | zhenzhao wang | | HDFS-15025 | Applying NVDIMM storage media to HDFS | Major | datanode, hdfs | YaYun Wang | YaYun Wang | | HDFS-15098 | Add SM4 encryption method for HDFS | Major | hdfs | liusheng | liusheng | | HADOOP-17125 | Using snappy-java in SnappyCodec | Major | common | DB Tsai | L. C. Hsieh | | HDFS-15294 | Federation balance tool | Major | rbf, tools | Jinglun | Jinglun | | HADOOP-17292 | Using lz4-java in Lz4Codec | Major | common | L. C. Hsieh | L. C. 
Hsieh | | HDFS-14090 | RBF: Improved isolation for downstream name" }, { "data": "{Static} | Major | rbf | CR Hota | Fengnan Li | | HDFS-15711 | Add Metrics to HttpFS Server | Major | httpfs | Ahmed Hussein | Ahmed Hussein | | HADOOP-16492 | Support HuaweiCloud Object Storage as a Hadoop Backend File System | Major | fs | zhongjun | lixianwei | | HDFS-15759 | EC: Verify EC reconstruction correctness on DataNode | Major | datanode, ec, erasure-coding | Toshihiko Uchida | Toshihiko Uchida | | HDFS-15970 | Print network topology on the web | Minor | namanode, ui | Tao Li | Tao Li | | HDFS-16048 | RBF: Print network topology on the router web | Minor | rbf | Tao Li | Tao Li | | HDFS-13916 | Distcp SnapshotDiff to support WebHDFS | Major | distcp, webhdfs | Xun REN | Xun REN | | HDFS-16203 | Discover datanodes with unbalanced block pool usage by the standard deviation | Major | datanode, ui | Tao Li | Tao Li | | HADOOP-18003 | Add a method appendIfAbsent for CallerContext | Minor | common | Tao Li | Tao Li | | HDFS-16337 | Show start time of Datanode on Web | Minor | datanode, ui | Tao Li | Tao Li | | HDFS-16331 | Make dfs.blockreport.intervalMsec reconfigurable | Major | datanode | Tao Li | Tao Li | | HDFS-16371 | Exclude slow disks when choosing volume | Major | datanode | Tao Li | Tao Li | | HADOOP-18055 | Async Profiler endpoint for Hadoop daemons | Major | common | Viraj Jasani | Viraj Jasani | | HDFS-16400 | Reconfig DataXceiver parameters for datanode | Major | datanode | Tao Li | Tao Li | | HDFS-16399 | Reconfig cache report parameters for datanode | Major | datanode | Tao Li | Tao Li | | HDFS-16398 | Reconfig block report parameters for datanode | Major | datanode | Tao Li | Tao Li | | HDFS-16451 | RBF: Add search box for Routers tab-mounttable web page | Minor | rbf | Max Xie | Max Xie | | HDFS-16396 | Reconfig slow peer parameters for datanode | Major | datanode | Tao Li | Tao Li | | HDFS-16397 | Reconfig slow disk parameters for datanode | Major | datanode | Tao Li | Tao Li | | YARN-11084 | Introduce new config to specify AM default node-label when not specified | Major | nodeattibute | Junfan Zhang | Junfan Zhang | | YARN-11069 | Dynamic Queue ACL handling in Legacy and Flexible Auto Created Queues | Major | capacity scheduler, yarn | Tamas Domok | Tamas Domok | | HDFS-16413 | Reconfig dfs usage parameters for datanode | Major | datanode | Tao Li | Tao Li | | HDFS-16521 | DFS API to retrieve slow datanodes | Major | datanode, dfsclient | Viraj Jasani | Viraj Jasani | | HDFS-16568 | dfsadmin -reconfig option to start/query reconfig on all live datanodes | Major | dfsadmin | Viraj Jasani | Viraj Jasani | | HDFS-16582 | Expose aggregate latency of slow node as perceived by the reporting node | Major | metrics | Viraj Jasani | Viraj Jasani | | HDFS-16595 | Slow peer metrics - add median, mad and upper latency limits | Major | metrics | Viraj Jasani | Viraj Jasani | | HADOOP-18345 | Enhance client protocol to propagate last seen state IDs for multiple nameservices. 
| Major | common | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | YARN-11241 | Add uncleaning option for local app log file with log-aggregation enabled | Major | log-aggregation | Ashutosh Gupta | Ashutosh Gupta | | YARN-11255 | Support loading alternative docker client config from system environment | Major | yarn | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16858 | Dynamically adjust max slow disks to exclude | Major | datanode | dingshun | dingshun | | HADOOP-18671 | Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem | Major | fs | Wei-Chiu Chuang | Tak-Lon (Stephen) Wu | | HDFS-16965 | Add switch to decide whether to enable native" }, { "data": "| Minor | erasure-coding | WangYuanben | WangYuanben | | MAPREDUCE-7432 | Make Manifest Committer the default for abfs and gcs | Major | client | Steve Loughran | Steve Loughran | | HDFS-17113 | Reconfig transfer and write bandwidth for datanode. | Major | datanode | WangYuanben | WangYuanben | | MAPREDUCE-7456 | Extend add-opens flag to container launch commands on JDK17 nodes | Major | build, mrv2 | Peter Szucs | Peter Szucs | | MAPREDUCE-7449 | Add add-opens flag to container launch commands on JDK17 nodes | Major | mr-am, mrv2 | Benjamin Teke | Benjamin Teke | | HDFS-17063 | Support to configure different capacity reserved for each disk of DataNode. | Minor | datanode, hdfs | Jiale Qi | Jiale Qi | | HDFS-17294 | Reconfigure the scheduling cycle of the slowPeerCollectorDaemon thread. | Major | configuration | Zhaobo Huang | Zhaobo Huang | | HDFS-17301 | Add read and write dataXceiver threads count metrics to datanode. | Major | datanode | Zhaobo Huang | Zhaobo Huang | | HADOOP-19017 | Setup pre-commit CI for Windows 10 | Critical | build | Gautham Banasandra | Gautham Banasandra | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:|:--|:--|:-|:-|:-| | HDFS-15245 | Improve JournalNode web UI | Major | journal-node, ui | Jianfei Jiang | Jianfei Jiang | | MAPREDUCE-7241 | FileInputFormat listStatus with less memory footprint | Major | job submission | Zhihua Deng | Zhihua Deng | | HDFS-15242 | Add metrics for operations hold lock times of FsDatasetImpl | Major | datanode | Xiaoqiao He | Xiaoqiao He | | HADOOP-16952 | Add" }, { "data": "to gitignore | Minor | build | Ayush Saxena | Ayush Saxena | | HADOOP-16954 | Add -S option in Count command to show only Snapshot Counts | Major | common | Hemanth Boyina | Hemanth Boyina | | YARN-10063 | Usage output of container-executor binary needs to include http/https argument | Minor | yarn | Siddharth Ahuja | Siddharth Ahuja | | YARN-10212 | Create separate configuration for max global AM attempts | Major | am | Jonathan Hung | Bilwa S T | | HDFS-15261 | RBF: Add Block Related Metrics | Major | metrics, rbf | Ayush Saxena | Ayush Saxena | | HDFS-15247 | RBF: Provide Non DFS Used per DataNode in DataNode UI | Major | datanode, rbf, ui | Ayush Saxena | Lisheng Sun | | YARN-5277 | When localizers fail due to resource timestamps being out, provide more diagnostics | Major | nodemanager | Steve Loughran | Siddharth Ahuja | | YARN-9995 | Code cleanup in TestSchedConfCLI | Minor | test | Szilard Nemeth | Bilwa S T | | YARN-9354 | Resources should be created with ResourceTypesTestHelper instead of TestUtils | Trivial | resourcemanager | Szilard Nemeth | Andras Gyori | | YARN-10002 | Code cleanup and improvements in ConfigurationStoreBaseTest | Minor | test, yarn | Szilard Nemeth | Benjamin Teke | | HDFS-15277 | Parent directory in the explorer does not support all path 
formats | Minor | ui | Jianfei Jiang | Jianfei Jiang | | HADOOP-16951 | Tidy Up Text and ByteWritables Classes | Minor | common | David Mollitor | David Mollitor | | YARN-9954 | Configurable max application tags and max tag length | Major | resourcemanager | Jonathan Hung | Bilwa S T | | YARN-10001 | Add explanation of unimplemented methods in InMemoryConfigurationStore | Major | capacity scheduler | Szilard Nemeth | Siddharth Ahuja | | MAPREDUCE-7199 | HsJobsBlock reuse JobACLsManager for checkAccess | Minor | mrv2 | Bibin Chundatt | Bilwa S T | | HDFS-15217 | Add more information to longest write/read lock held log | Major | namanode | Toshihiro Suzuki | Toshihiro Suzuki | | HADOOP-17001 | The suffix name of the unified compression class | Major | io | bianqi | bianqi | | YARN-9997 | Code cleanup in ZKConfigurationStore | Minor | resourcemanager | Szilard Nemeth | Andras Gyori | | YARN-9996 | Code cleanup in QueueAdminConfigurationMutationACLPolicy | Major | resourcemanager | Szilard Nemeth | Siddharth Ahuja | | YARN-9998 | Code cleanup in LeveldbConfigurationStore | Minor | resourcemanager | Szilard Nemeth | Benjamin Teke | | YARN-9999 | TestFSSchedulerConfigurationStore: Extend from ConfigurationStoreBaseTest, general code cleanup | Minor | test | Szilard Nemeth | Benjamin Teke | | HDFS-15295 | AvailableSpaceBlockPlacementPolicy should use chooseRandomWithStorageTypeTwoTrial() for better performance. | Minor | block placement | Jinglun | Jinglun | | HADOOP-16054 | Update Dockerfile to use Bionic | Major | build, test | Akira Ajisaka | Akira Ajisaka | | YARN-10189 | Code cleanup in LeveldbRMStateStore | Minor | resourcemanager | Benjamin Teke | Benjamin Teke | | HADOOP-16886 | Add hadoop.http.idle_timeout.ms to core-default.xml | Major | conf | Wei-Chiu Chuang | Lisheng Sun | | HDFS-15328 | Use DFSConfigKeys MONITORCLASSDEFAULT constant | Minor | hdfs | bianqi | bianqi | | HDFS-14283 | DFSInputStream to prefer cached replica | Major | hdfs-client | Wei-Chiu Chuang | Lisheng Sun | | HDFS-15347 | Replace the deprecated method shaHex | Minor | balancer & mover | bianqi | bianqi | | HDFS-15338 | listOpenFiles() should throw InvalidPathException in case of invalid paths | Minor | namanode | Jinglun | Jinglun | | YARN-10160 | Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo | Major | capacity scheduler, webapp | Prabhu Joseph | Prabhu Joseph | | HDFS-15350 | Set dfs.client.failover.random.order to true as default | Major | hdfs-client | Takanobu Asanuma | Takanobu Asanuma | | HDFS-15255 | Consider StorageType when DatanodeManager#sortLocatedBlock() | Major | hdfs | Lisheng Sun | Lisheng Sun | | HDFS-15345 | RBF: RouterPermissionChecker#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442 | Major | rbf | Xiaoyu Yao | Xiaoyu Yao | | YARN-10260 | Allow transitioning queue from DRAINING to RUNNING state | Major | capacity scheduler | Jonathan Hung | Bilwa S T | | HDFS-15344 | DataNode#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442 | Major | datanode | Xiaoyu Yao | Xiaoyu Yao | | HADOOP-14254 | Add a Distcp option to preserve Erasure Coding attributes | Major | tools/distcp | Wei-Chiu Chuang | Ayush Saxena | | HADOOP-16965 | Introduce StreamContext for Abfs Input and Output streams. 
| Major | fs/azure | Mukund Thakur | Mukund Thakur | | HADOOP-17036 | TestFTPFileSystem failing as ftp server dir already exists | Minor | fs, test | Steve Loughran | Mikhail Pryakhin | | HDFS-15356 | Unify configuration `dfs.ha.allow.stale.reads` to DFSConfigKeys | Major | hdfs | Xiaoqiao He | Xiaoqiao He | | HDFS-15358 | RBF: Unify router datanode UI with namenode datanode UI | Major | datanode, rbf, ui | Ayush Saxena | Ayush Saxena | | HADOOP-17042 | Hadoop distcp throws ERROR: Tools helper" }, { "data": "was not found | Minor | tools/distcp | Aki Tanaka | Aki Tanaka | | HDFS-15202 | HDFS-client: boost ShortCircuit Cache | Minor | dfsclient | Danil Lipovoy | Danil Lipovoy | | HDFS-15207 | VolumeScanner skip to scan blocks accessed during recent scan peroid | Minor | datanode | Yang Yun | Yang Yun | | HDFS-14999 | Avoid Potential Infinite Loop in DFSNetworkTopology | Major | dfs | Ayush Saxena | Ayush Saxena | | HDFS-15353 | Use sudo instead of su to allow nologin user for secure DataNode | Major | datanode, security | Akira Ajisaka | Kei Kori | | HDFS-13639 | SlotReleaser is not fast enough | Major | hdfs-client | Gang Xie | Lisheng Sun | | HDFS-15369 | Refactor method VolumeScanner#runLoop() | Minor | datanode | Yang Yun | Yang Yun | | HDFS-15355 | Make the default block storage policy ID configurable | Minor | block placement, namenode | Yang Yun | Yang Yun | | HDFS-15368 | TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally | Major | balancer, test | Xiaoqiao He | Xiaoqiao He | | HADOOP-14698 | Make copyFromLocals -t option available for put as well | Major | common | Andras Bokor | Andras Bokor | | HDFS-10792 | RedundantEditLogInputStream should log caught exceptions | Minor | namenode | Wei-Chiu Chuang | Wei-Chiu Chuang | | YARN-6492 | Generate queue metrics for each partition | Major | capacity scheduler | Jonathan Hung | Manikandan R | | HADOOP-16828 | Zookeeper Delegation Token Manager fetch sequence number by batch | Major | security | Fengnan Li | Fengnan Li | | HDFS-14960 | TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology | Minor | hdfs | Jim Brennan | Jim Brennan | | HDFS-15359 | EC: Allow closing a file with committed blocks | Major | erasure-coding | Ayush Saxena | Ayush Saxena | | HADOOP-17047 | TODO comments exist in trunk while the related issues are already fixed. 
| Trivial | fs | Rungroj Maipradit | Rungroj Maipradit | | HDFS-15376 | Update the error about command line POST in httpfs documentation | Major | httpfs | bianqi | bianqi | | HDFS-15406 | Improve the speed of Datanode Block Scan | Major | datanode | Hemanth Boyina | Hemanth Boyina | | HADOOP-17009 | Embrace Immutability of Java Collections | Minor | common | David Mollitor | David Mollitor | | YARN-9460 | QueueACLsManager and ReservationsACLManager should not use instanceof checks | Major | resourcemanager | Szilard Nemeth | Bilwa S T | | YARN-10321 | Break down TestUserGroupMappingPlacementRule#testMapping into test scenarios | Minor | test | Szilard Nemeth | Szilard Nemeth | | HDFS-15383 | RBF: Disable watch in ZKDelegationSecretManager for performance | Major | rbf | Fengnan Li | Fengnan Li | | HDFS-15416 | Improve DataStorage#addStorageLocations() for empty locations | Major | datanode | JiangHua Zhu | JiangHua Zhu | | HADOOP-17090 | Increase precommit job timeout from 5 hours to 20 hours | Major | build | Akira Ajisaka | Akira Ajisaka | | HADOOP-17084 | Update Dockerfile_aarch64 to use Bionic | Major | build, test | RuiChen | zhaorenhai | | HDFS-15312 | Apply umask when creating directory by WebHDFS | Minor | webhdfs | Ye Ni | Ye Ni | | HDFS-15425 | Review Logging of DFSClient | Minor | dfsclient | Hongbing Wang | Hongbing Wang | | YARN-8047 | RMWebApp make external class pluggable | Minor | resourcemanager, webapp | Bibin Chundatt | Bilwa S T | | YARN-10333 | YarnClient obtain Delegation Token for Log Aggregation Path | Major | log-aggregation | Prabhu Joseph | Prabhu Joseph | | HADOOP-17079 | Optimize UGI#getGroups by adding UGI#getGroupsSet | Major | build | Xiaoyu Yao | Xiaoyu Yao | | YARN-10297 | TestContinuousScheduling#testFairSchedulerContinuousSchedulingInitTime fails intermittently | Major | fairscheduler, test | Jonathan Hung | Jim Brennan | | HDFS-15371 | Nonstandard characters exist in NameNode.java | Minor | namanode | JiangHua Zhu | Zhao Yi Ming | | HADOOP-17127 | Use" }, { "data": "to initialize rpc queueTime and processingTime | Minor | common | Jim Brennan | Jim Brennan | | HDFS-15385 | Upgrade boost library to 1.72 | Critical | build, libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HADOOP-16930 | Add com.amazonaws.auth.profile.ProfileCredentialsProvider to hadoop-aws docs | Minor | documentation, fs/s3 | Nicholas Chammas | Nicholas Chammas | | HDFS-15476 | Make AsyncStream class executor_ member private | Minor | build, libhdfs++ | Suraj Naik | Suraj Naik | | HDFS-15381 | Fix typo corrputBlocksFiles to corruptBlocksFiles | Trivial | hdfs | bianqi | bianqi | | HDFS-15404 | ShellCommandFencer should expose info about source | Major | ha, tools | Chen Liang | Chen Liang | | HADOOP-17147 | Dead link in hadoop-kms/index.md.vm | Minor | documentation, kms | Akira Ajisaka | Xieming Li | | HADOOP-17113 | Adding ReadAhead Counters in ABFS | Major | fs/azure | Mehakmeet Singh | Mehakmeet Singh | | YARN-10319 | Record Last N Scheduler Activities from ActivitiesManager | Major | activitiesmanager, resourcemanager, router | Prabhu Joseph | Prabhu Joseph | | HADOOP-17141 | Add Capability To Get Text Length | Minor | common, io | David Mollitor | David Mollitor | | YARN-10208 | Add capacityScheduler metric for NODE_UPDATE interval | Minor | capacity scheduler, metrics | Pranjal Protim Borah | Pranjal Protim Borah | | YARN-10343 | Legacy RM UI should include labeled metrics for allocated, total, and reserved resources. 
| Major | resourcemanager, ui | Eric Payne | Eric Payne | | YARN-1529 | Add Localization overhead metrics to NM | Major | nodemanager | Gera Shegalov | Jim Brennan | | YARN-10381 | Send out application attempt state along with other elements in the application attempt object returned from appattempts REST API call | Minor | yarn-ui-v2 | Siddharth Ahuja | Siddharth Ahuja | | YARN-10361 | Make custom DAO classes configurable into RMWebApp#JAXBContextResolver | Major | resourcemanager, webapp | Prabhu Joseph | Bilwa S T | | HDFS-15512 | Remove smallBufferSize in DFSClient | Minor | dfsclient | Takanobu Asanuma | Takanobu Asanuma | | YARN-10251 | Show extended resources on legacy RM UI. | Major | . | Eric Payne | Eric Payne | | HDFS-15520 | Use visitor pattern to visit namespace tree | Major | namenode | Tsz-wo Sze | Tsz-wo Sze | | YARN-10389 | Option to override RMWebServices with custom WebService class | Major | resourcemanager, webservice | Prabhu Joseph | Tanu Ajmera | | HDFS-15493 | Update block map and name cache in parallel while loading fsimage. | Major | namenode | Chengwei Wang | Chengwei Wang | | HADOOP-17206 | Add python2 to required package on CentOS 8 for building documentation | Minor | documentation | Masatake Iwasaki | Masatake Iwasaki | | HDFS-15519 | Check inaccessible INodes in FsImageValidation | Major | tools | Tsz-wo Sze | Tsz-wo Sze | | YARN-10399 | Refactor NodeQueueLoadMonitor class to make it extendable | Minor | resourcemanager | Zhengbo Li | Zhengbo Li | | HDFS-15448 | Remove duplicate BlockPoolManager starting when run DataNode | Major | datanode | JiangHua Zhu | JiangHua Zhu | | HADOOP-17159 | Make UGI support forceful relogin from keytab ignoring the last login time | Major | security | Sandeep Guggilam | Sandeep Guggilam | | HADOOP-17232 | Erasure Coding: Typo in document | Trivial | documentation | Hui Fei | Hui Fei | | HDFS-15550 | Remove unused imports from" }, { "data": "| Minor | test | Ravuri Sushma sree | Ravuri Sushma sree | | YARN-10342 | [UI1] Provide a way to hide Tools section in Web UIv1 | Minor | ui | Andras Gyori | Andras Gyori | | YARN-10407 | Add phantomjsdriver.log to gitignore | Minor | yarn | Takanobu Asanuma | Takanobu Asanuma | | HADOOP-17235 | Erasure Coding: Remove dead code from common side | Minor | erasure-coding | Hui Fei | Hui Fei | | YARN-9136 | getNMResourceInfo NodeManager REST API method is not documented | Major | documentation, nodemanager | Szilard Nemeth | Hudky Mrton Gyula | | YARN-10353 | Log vcores used and cumulative cpu in containers monitor | Minor | yarn | Jim Brennan | Jim Brennan | | YARN-10369 | Make NMTokenSecretManagerInRM sending NMToken for nodeId DEBUG | Minor | yarn | Jim Brennan | Jim Brennan | | HDFS-14694 | Call recoverLease on DFSOutputStream close exception | Major | hdfs-client | Chen Zhang | Lisheng Sun | | YARN-10390 | LeafQueue: retain user limits cache across assignContainers() calls | Major | capacity scheduler, capacityscheduler | Muhammad Samir Khan | Muhammad Samir Khan | | HDFS-15574 | Remove unnecessary sort of block list in DirectoryScanner | Major | datanode | Stephen ODonnell | Stephen ODonnell | | HADOOP-17208 | LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances | Major | common | Xiaoyu Yao | Xiaoyu Yao | | HADOOP-17270 | Fix testCompressorDecompressorWithExeedBufferLimit to cover the intended scenario | Minor | test | Masatake Iwasaki | Masatake Iwasaki | | HDFS-15579 | RBF: The constructor of PathLocation may got some misunderstanding | 
Minor | rbf | Janus Chow | Janus Chow | | HDFS-15554 | RBF: force router check file existence in destinations before adding/updating mount points | Minor | rbf | Fengnan Li | Fengnan Li | | HADOOP-17259 | Allow SSLFactory fallback to input config if ssl-*.xml fail to load from classpath | Major | common | Xiaoyu Yao | Xiaoyu Yao | | HDFS-15581 | Access Controlled HTTPFS Proxy | Minor | httpfs | Richard | Richard | | HDFS-15557 | Log the reason why a storage log file cant be deleted | Minor | hdfs | Ye Ni | Ye Ni | | YARN-6754 | Fair scheduler docs should explain meaning of weight=0 for a queue | Major | docs | Daniel Templeton | Takeru Kuramoto | | HADOOP-17283 | Hadoop - Upgrade to JQuery 3.5.1 | Major | build, common | Aryan Gupta | Aryan Gupta | | HADOOP-17282 | libzstd-dev should be used instead of libzstd1-dev on Ubuntu 18.04 or higher | Minor | common | Takeru Kuramoto | Takeru Kuramoto | | HDFS-15594 | Lazy calculate live datanodes in safe mode tip | Minor | namenode | Ye Ni | Ye Ni | | HDFS-15577 | Refactor TestTracing | Major | test | Takanobu Asanuma | Takanobu Asanuma | | HDFS-15530 | RBF: Fix typo in DFSROUTERQUOTACACHEUPDATE_INTERVAL var definition | Minor | rbf | Sha Fanghao | Sha Fanghao | | HDFS-15604 | Fix Typo for HdfsDataNodeAdminGuide doc | Trivial | documentation | Hui Fei | Hui Fei | | HDFS-15603 | RBF: Fix getLocationsForPath twice in create operation | Major | rbf | Zhaohui Wang | Zhaohui Wang | | HADOOP-17284 | Support BCFKS keystores for Hadoop Credential Provider | Major | security | Xiaoyu Yao | Xiaoyu Yao | | HADOOP-17280 | Service-user cost shouldnt be accumulated to totalDecayedCallCost and" }, { "data": "| Major | ipc | Jinglun | Jinglun | | HDFS-15415 | Reduce locking in Datanode DirectoryScanner | Major | datanode | Stephen ODonnell | Stephen ODonnell | | HADOOP-17287 | Support new Instance by non default constructor by ReflectionUtils | Major | common | Baolong Mao | Baolong Mao | | HADOOP-17276 | Extend CallerContext to make it include many items | Major | common, ipc | Hui Fei | Hui Fei | | YARN-10451 | RM (v1) UI NodesPage can NPE when yarn.io/gpu resource type is defined. | Major | resourcemanager, ui | Eric Payne | Eric Payne | | MAPREDUCE-7301 | Expose Mini MR Cluster attribute for testing | Minor | test | Swaroopa Kadam | Swaroopa Kadam | | HDFS-15567 | [SBN Read] HDFS should expose msync() API to allow downstream applications call it explicitly. 
| Major | ha, hdfs-client | Konstantin Shvachko | Konstantin Shvachko | | HADOOP-17304 | KMS ACL: Allow DeleteKey Operation to Invalidate Cache | Major | kms | Xiaoyu Yao | Xiaoyu Yao | | HDFS-15633 | Avoid redundant RPC calls for getDiskStatus | Major | dfsclient | Ayush Saxena | Ayush Saxena | | YARN-10450 | Add cpu and memory utilization per node and cluster-wide metrics | Minor | yarn | Jim Brennan | Jim Brennan | | HADOOP-17144 | Update Hadoops lz4 to v1.9.2 | Major | build, common | Hemanth Boyina | Hemanth Boyina | | HDFS-15629 | Add seqno when warning slow mirror/disk in BlockReceiver | Major | datanode | Haibin Huang | Haibin Huang | | HADOOP-17302 | Upgrade to jQuery 3.5.1 in hadoop-sls | Major | build, common | Aryan Gupta | Aryan Gupta | | HDFS-15652 | Make block size from NNThroughputBenchmark configurable | Minor | benchmarks | Hui Fei | Hui Fei | | YARN-10475 | Scale RM-NM heartbeat interval based on node utilization | Minor | yarn | Jim Brennan | Jim Brennan | | HDFS-15665 | Balancer logging improvement | Major | balancer & mover | Konstantin Shvachko | Konstantin Shvachko | | HADOOP-17342 | Creating a token identifier should not do kerberos name resolution | Major | common | Jim Brennan | Jim Brennan | | YARN-10479 | RMProxy should retry on SocketTimeout Exceptions | Major | yarn | Jim Brennan | Jim Brennan | | HDFS-15623 | Respect configured values of rpc.engine | Major | hdfs | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | HDFS-15668 | RBF: Fix RouterRPCMetrics annocation and document misplaced error | Minor | documentation | Hongbing Wang | Hongbing Wang | | HADOOP-17369 | Bump up snappy-java to 1.1.8.1 | Minor | common | Masatake Iwasaki | Masatake Iwasaki | | YARN-10480 | replace href tags with ng-href | Trivial | applications-catalog, webapp | Gabriel Medeiros Coelho | Gabriel Medeiros Coelho | | HDFS-15608 | Rename variable DistCp#CLEANUP | Trivial | distcp | JiangHua Zhu | JiangHua Zhu | | HDFS-15469 | Dynamically configure the size of PacketReceiver#MAXPACKETSIZE | Major | hdfs-client | JiangHua Zhu | JiangHua Zhu | | HADOOP-17367 | Add InetAddress api to" }, { "data": "| Major | performance, security | Ahmed Hussein | Ahmed Hussein | | MAPREDUCE-7304 | Enhance the map-reduce Job end notifier to be able to notify the given URL via a custom class | Major | mrv2 | Daniel Fritsi | Zoltn Erdmann | | HDFS-15684 | EC: Call recoverLease on DFSStripedOutputStream close exception | Major | dfsclient, ec | Hongbing Wang | Hongbing Wang | | MAPREDUCE-7309 | Improve performance of reading resource request for mapper/reducers from config | Major | applicationmaster | Wangda Tan | Peter Bacsko | | HDFS-15694 | Avoid calling UpdateHeartBeatState inside DataNodeDescriptor | Major | datanode | Ahmed Hussein | Ahmed Hussein | | HDFS-14904 | Add Option to let Balancer prefer highly utilized nodes in each iteration | Major | balancer & mover | Leon Gao | Leon Gao | | HDFS-15705 | Fix a typo in SecondaryNameNode.java | Trivial | hdfs | Sixiang Ma | Sixiang Ma | | HDFS-15703 | Dont generate edits for set operations that are no-op | Major | namenode | Ahmed Hussein | Ahmed Hussein | | HADOOP-17392 | Remote exception messages should not include the exception class | Major | ipc | Ahmed Hussein | Ahmed Hussein | | HDFS-15706 | HttpFS: Log more information on request failures | Major | httpfs | Ahmed Hussein | Ahmed Hussein | | HADOOP-17389 | KMS should log full UGI principal | Major | kms | Ahmed Hussein | Ahmed Hussein | | HDFS-15221 | Add checking of effective filesystem 
during initializing storage locations | Minor | datanode | Yang Yun | Yang Yun |
| HDFS-15712 | Upgrade googletest to 1.10.0 | Critical | build, libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HADOOP-17425 | Bump up snappy-java to 1.1.8.2 | Minor | build, common | L. C. Hsieh | L. C. Hsieh |
| HDFS-15717 | Improve fsck logging | Major | logging, namenode | Ahmed Hussein | Ahmed Hussein |
| HDFS-15728 | Update description of dfs.datanode.handler.count in hdfs-default.xml | Minor | configuration | liuyan | liuyan |
| HDFS-15704 | Mitigate lease monitor's rapid infinite loop | Major | namenode | Ahmed Hussein | Ahmed Hussein |
| HDFS-15733 | Add seqno in log when BlockReceiver receive packet | Minor | datanode | Haibin Huang | Haibin Huang |
| HDFS-15655 | Add option to make balancer prefer to get cold blocks | Minor | balancer & mover | Yang Yun | Yang Yun |
| HDFS-15569 | Speed up the Storage#doRecover during datanode rolling upgrade | Major | datanode | Hemanth Boyina | Hemanth Boyina |
| HDFS-15749 | Make size of editPendingQ can be configurable | Major | hdfs | Baolong Mao | Baolong Mao |
| HDFS-15745 | Make DataNodePeerMetrics#LOW_THRESHOLD_MS and MIN_OUTLIER_DETECTION_NODES configurable | Major | datanode, metrics | Haibin Huang | Haibin Huang |
| HDFS-15751 | Add documentation for msync() API to filesystem.md | Major | documentation | Konstantin Shvachko | Konstantin Shvachko |
| HDFS-15754 | Create packet metrics for DataNode | Minor | datanode | Fengnan Li | Fengnan Li |
| YARN-10538 | Add recommissioning nodes to the list of updated nodes returned to the AM | Major | resourcemanager | Srinivas S T | Srinivas S T |
| YARN-10541 | capture the performance metrics of ZKRMStateStore | Minor | resourcemanager | Max Xie | Max Xie |
| HADOOP-17408 | Optimize NetworkTopology while sorting of block locations | Major | common, net | Ahmed Hussein | Ahmed Hussein |
| YARN-8529 | Add timeout to RouterWebServiceUtil#invokeRMWebService | Major | router, webservice | Íñigo Goiri | Minni Mittal |
| YARN-4589 | Diagnostics for localization timeouts is lacking | Major | nodemanager | Chang Li | Chang Li |
| YARN-10562 | Follow up changes for YARN-9833 | Major | yarn | Jim Brennan | Jim Brennan |
| HDFS-15758 | Fix typos in MutableMetric | Trivial | metrics | Haibin Huang | Haibin Huang |
| HDFS-15783 | Speed up BlockPlacementPolicyRackFaultTolerant#verifyBlockPlacement | Major | block placement | Akira Ajisaka | Akira Ajisaka |
| YARN-10519 | Refactor QueueMetricsForCustomResources class to move to yarn-common package | Major | metrics | Minni Mittal | Minni Mittal |
| YARN-10490 | yarn top command not quitting completely with ctrl+c | Minor | yarn | Agshin Kazimli | Agshin Kazimli |
| HADOOP-17478 | Improve the description of | Minor | documentation | Akira Ajisaka | Akira Ajisaka |
| HADOOP-17452 | Upgrade guice to 4.2.3 | Major | build, common | Yuming Wang | Yuming Wang |
| HADOOP-17465 | Update Dockerfile to use Focal | Major | build, test | Gautham Banasandra | Gautham Banasandra |
| HDFS-15789 | Lease renewal does not require namesystem lock | Major | hdfs | Jim Brennan | Jim Brennan |
| HDFS-15740 | Make basename cross-platform | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HADOOP-17501 | Fix logging typo in ShutdownHookManager | Major | common | Konstantin Shvachko | Fengnan Li |
| HADOOP-17354 | Move Jenkinsfile outside of the root directory | Major | build | Akira Ajisaka | Akira Ajisaka |
| HADOOP-17508 | Simplify dependency installation instructions | Trivial | documentation | Gautham Banasandra | Gautham Banasandra |
| HADOOP-17509 | Parallelize building of dependencies | Minor | build | Gautham Banasandra | Gautham Banasandra |
| HDFS-15803 | EC: Remove unnecessary method (getWeight) in StripedReconstructionInfo | Trivial | erasure-coding | Haiyang Hu | Haiyang Hu |
| HDFS-15799 | Make DisallowedDatanodeException terse | Minor | hdfs | Richard | Richard |
| HDFS-15819 | Fix a codestyle issue for TestQuotaByStorageType | Trivial | hdfs | Baolong Mao | Baolong Mao |
| YARN-10610 | Add queuePath to RESTful API for CapacityScheduler consistent with FairScheduler queuePath | Major | capacity scheduler | Qi Zhu | Qi Zhu |
| HDFS-15813 | DataStreamer: keep sending heartbeat packets while streaming | Major | hdfs | Jim Brennan | Jim Brennan |
| YARN-9650 | Set thread names for CapacityScheduler AsyncScheduleThread | Minor | capacity scheduler | Bibin Chundatt | Amogh Desai |
| MAPREDUCE-7319 | Log list of mappers at trace level in ShuffleHandler audit log | Minor | yarn | Jim Brennan | Jim Brennan |
| HDFS-15821 | Add metrics for in-service datanodes | Minor | metrics | Zehao Chen | Zehao Chen |
| YARN-10625 | FairScheduler: add global flag to disable AM-preemption | Major | fairscheduler | Peter Bacsko | Peter Bacsko |
| YARN-10626 | Log resource allocation in NM log at container start time | Major | nodemanager | Eric Badger | Eric Badger |
| HDFS-15815 | if required storageType are unavailable, log the failed reason during choosing Datanode | Minor | block placement | Yang Yun | Yang Yun |
| HDFS-15830 | Support to make dfs.image.parallel.load reconfigurable | Major | namenode | Hui Fei | Hui Fei |
| HDFS-15835 | Erasure coding: Add/remove logs for the better readability/debugging | Minor | erasure-coding, hdfs | Bhavik Patel | Bhavik Patel |
| HDFS-15826 | Solve the problem of incorrect progress of delegation tokens when loading FsImage | Major | namanode | JiangHua Zhu | JiangHua Zhu |
| HDFS-15734 | [READ] DirectoryScanner#scan need not check StorageType.PROVIDED | Minor | datanode | Yuxuan Wang | Yuxuan Wang |
| HADOOP-17538 | Add kms-default.xml and httpfs-default.xml to site index | Minor | documentation | Masatake Iwasaki | Masatake Iwasaki |
| YARN-10613 | Config to allow Intra- and Inter-queue preemption to enable/disable conservativeDRF | Minor | capacity scheduler, scheduler preemption | Eric Payne | Eric Payne |
| YARN-10653 | Fixed the findbugs issues introduced by YARN-10647. | Major | test | Qi Zhu | Qi Zhu |
| HDFS-15856 | Make write pipeline retry times | Minor | hdfs-client | Qi Zhu | Qi Zhu |
| MAPREDUCE-7324 | ClientHSSecurityInfo class is in wrong META-INF file | Major | mapreduce-client | Eric Badger | Eric Badger |
| HADOOP-17546 | Update Description of hadoop-http-auth-signature-secret in HttpAuthentication.md | Minor | documentation | Ravuri Sushma sree | Ravuri Sushma sree |
| YARN-10623 | Capacity scheduler should support refresh queue automatically by a thread policy. | Major | capacity scheduler | Qi Zhu | Qi Zhu |
| HADOOP-17552 | Change ipc.client.rpc-timeout.ms from 0 to 120000 by default to avoid potential hang | Major | ipc | Haoze Wu | Haoze Wu |
| HDFS-15384 | Document getLocatedBlocks(String src, long start) of DFSClient only return partial blocks | Minor | documentation | Yang Yun | Yang Yun |
| YARN-10658 | CapacityScheduler QueueInfo add queue path field to avoid ambiguous QueueName. | Major | capacity scheduler | Qi Zhu | Qi Zhu |
| YARN-10664 | Allow parameter expansion in NM_ADMIN_USER_ENV | Major | yarn | Jim Brennan | Jim Brennan |
| HADOOP-17570 | Apply YETUS-1102 to re-enable GitHub comments | Major | build | Akira Ajisaka | Akira Ajisaka |
| HADOOP-17514 | Remove trace subcommand from hadoop CLI | Minor | scripts | Masatake Iwasaki | Masatake Iwasaki |
| HADOOP-17482 | Remove Commons Logger from FileSystem Class | Minor | common | David Mollitor | David Mollitor |
| HDFS-15882 | Fix incorrectly initializing RandomAccessFile based on configuration options | Major | namanode | Xie Lei | Xie Lei |
| HDFS-15843 | [libhdfs++] Make write cross platform | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| YARN-10497 | Fix an issue in CapacityScheduler which fails to delete queues | Major | capacity scheduler | Wangda Tan | Wangda Tan |
| HADOOP-17594 | DistCp: Expose the JobId for applications executing through run method | Major | tools/distcp | Ayush Saxena | Ayush Saxena |
| YARN-10476 | Queue metrics for Unmanaged applications | Minor | resourcemanager | Cyrus Jackson | Cyrus Jackson |
| HDFS-15787 | Remove unnecessary Lease Renew in FSNamesystem#internalReleaseLease | Major | namenode | Lisheng Sun | Lisheng Sun |
| HDFS-15903 | Refactor X-Platform library | Minor | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HADOOP-17599 | Remove NULL checks before instanceof | Minor | common | Jiajun Jiang | Jiajun Jiang |
| HDFS-15913 | Remove useless NULL checks before instanceof | Minor | hdfs | Jiajun Jiang | Jiajun Jiang |
| HDFS-15907 | Reduce Memory Overhead of AclFeature by avoiding AtomicInteger | Major | namenode | Stephen O'Donnell | Stephen O'Donnell |
| HDFS-15911 | Provide blocks moved count in Balancer iteration result | Major | balancer & mover | Viraj Jasani | Viraj Jasani |
| HDFS-15919 | BlockPoolManager should log stack trace if unable to get Namenode addresses | Major | datanode | Stephen O'Donnell | Stephen O'Donnell |
| HADOOP-17133 | Implement HttpServer2 metrics | Major | httpfs, kms | Akira Ajisaka | Akira Ajisaka |
| HADOOP-17531 | DistCp: Reduce memory usage on copying huge directories | Critical | tools/distcp | Ayush Saxena | Ayush Saxena |
| HDFS-15879 | Exclude slow nodes when choose targets for blocks | Major | block placement | Tao Li | Tao Li |
| HDFS-15764 | Notify Namenode missing or new block on disk as soon as possible | Minor | datanode | Yang Yun | Yang Yun |
| HADOOP-16870 | Use spotbugs-maven-plugin instead of findbugs-maven-plugin | Major | build | Akira Ajisaka | Akira Ajisaka |
| HADOOP-17222 | Create socket address leveraging URI cache | Major | common, hdfs-client | Rui Fan | Rui Fan |
| YARN-10544 | having un-necessary access identifier static final | Trivial | resourcemanager | ANANDA G B | ANANDA G B |
| HDFS-15932 | Improve the balancer error message when process exits abnormally. | Major | balancer | Renukaprasad C | Renukaprasad C |
| HDFS-15863 | RBF: Validation message to be corrected in FairnessPolicyController | Minor | rbf | Renukaprasad C | Renukaprasad C |
| HADOOP-16524 | Automatic keystore reloading for HttpServer2 | Major | common | Kihwal Lee | Borislav Iordanov |
| YARN-10726 | Log the size of DelegationTokenRenewer event queue in case of too many pending events | Major | resourcemanager | Qi Zhu | Qi Zhu |
| HDFS-15931 | Fix non-static inner classes for better memory management | Major | hdfs | Viraj Jasani | Viraj Jasani |
| HADOOP-17371 | Bump Jetty to the latest version 9.4.35 | Major | build, common | Wei-Chiu Chuang | Wei-Chiu Chuang |
| HDFS-15942 | Increase Quota initialization threads | Major | namenode | Stephen O'Donnell | Stephen O'Donnell |
| HDFS-15909 | Make fnmatch cross platform | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HADOOP-17613 | Log not flushed fully when daemon shutdown | Major | common | Renukaprasad C | Renukaprasad C |
| HDFS-15937 | Reduce memory used during datanode layout upgrade | Major | datanode | Stephen O'Donnell | Stephen O'Donnell |
| HDFS-15955 | Make explicit_bzero cross platform | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HDFS-15962 | Make strcasecmp cross platform | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HADOOP-17569 | Building native code fails on Fedora 33 | Major | build, common | Kengo Seki | Masatake Iwasaki |
| HADOOP-17633 | Bump json-smart to 2.4.2 and nimbus-jose-jwt to 9.8 due to CVEs | Major | auth, build | helen huang | Viraj Jasani |
| HADOOP-17620 | DistCp: Use Iterator for listing target directory as well | Major | tools/distcp | Ayush Saxena | Ayush Saxena |
| YARN-10743 | Add a policy for not aggregating for containers which are killed because exceeding container log size limit. | Major | nodemanager | Qi Zhu | Qi Zhu |
| HDFS-15978 | Solve DatanodeManager#getBlockRecoveryCommand() printing IOException | Trivial | namanode | JiangHua Zhu | JiangHua Zhu |
| HDFS-15967 | Improve the log for Short Circuit Local Reads | Minor | datanode | Bhavik Patel | Bhavik Patel |
| HADOOP-17675 | LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException | Major | common | Tamas Mate | István Fajth |
| HDFS-15934 | Make DirectoryScanner reconcile blocks batch size and interval between batch | Major | datanode, diskbalancer | Qi Zhu | Qi Zhu |
| HADOOP-11616 | Remove workaround for Curator's ChildReaper requiring Guava 15+ | Major | common | Robert Kanter | Viraj Jasani |
| HADOOP-17690 | Improve the log for The DecayRpcScheduler | Minor | ipc | Bhavik Patel | Bhavik Patel |
| HDFS-16003 | ProcessReport print invalidatedBlocks should judge debug level at first | Minor | namanode | lei w | lei w |
| HADOOP-17678 | Dockerfile for building on Centos 7 | Major | build | Gautham Banasandra | Gautham Banasandra |
| HDFS-16007 | Deserialization of ReplicaState should avoid throwing ArrayIndexOutOfBoundsException | Major | hdfs | junwen yang | Viraj Jasani |
| HADOOP-16822 | Provide source artifacts for hadoop-client-api | Major | build | Karel Kolman | Karel Kolman |
| HADOOP-17693 | Dockerfile for building on Centos 8 | Major | build | Gautham Banasandra | Gautham Banasandra |
| MAPREDUCE-7343 | Increase the job name max length in mapred job -list | Major | mapreduce-client | Ayush Saxena | Ayush Saxena |
| YARN-10737 | Fix typos in CapacityScheduler#schedule. | Minor | capacity scheduler | Qi Zhu | Qi Zhu |
| YARN-10545 | Improve the readability of diagnostics log in yarn-ui2 web page. | Minor | yarn-ui-v2 | huangkunlun | huangkunlun |
| HADOOP-17680 | Allow ProtobufRpcEngine to be extensible | Major | common | Hector Sandoval Chaverri | Hector Sandoval Chaverri |
| YARN-10763 | Add the number of containers assigned per second metrics to ClusterMetrics | Minor | metrics | chaosju | chaosju |
| HDFS-15877 | BlockReconstructionWork should resetTargets() before BlockManager#validateReconstructionWork return false | Minor | block placement | Haiyang Hu | Haiyang Hu |
| YARN-10258 | Add metrics for ApplicationsRunning in NodeManager | Minor | nodemanager | ANANDA G B | ANANDA G B |
| HDFS-15757 | RBF: Improving Router Connection Management | Major | rbf | Fengnan Li | Fengnan Li |
| HDFS-16018 | Optimize the display of hdfs count -e or count -t command | Minor | dfsclient | Hongbing Wang | Hongbing Wang |
| YARN-9279 | Remove the old hamlet package | Major | webapp | Akira Ajisaka | Akira Ajisaka |
| YARN-10123 | Error message around yarn app -stop/start can be improved to highlight that an implementation at framework level is needed for the stop/start functionality to work | Minor | client, documentation | Siddharth Ahuja | Siddharth Ahuja |
| YARN-10753 | Document the removal of FS default queue creation | Major | fairscheduler | Benjamin Teke | Benjamin Teke |
| HDFS-15790 | Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist | Critical | ipc | David Mollitor | Vinayakumar B |
| HDFS-16024 | RBF: Rename data to the Trash should be based on src locations | Major | rbf | Xiangyi Zhu | Xiangyi Zhu |
| HDFS-15971 | Make mkstemp cross platform | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HDFS-15946 | Fix java doc in FSPermissionChecker | Minor | documentation | Tao Li | Tao Li |
| HADOOP-17727 | Modularize docker images | Major | build | Gautham Banasandra | Gautham Banasandra |
| YARN-10792 | Set Completed AppAttempt LogsLink to Log Server Url | Major | webapp | Prabhu Joseph | Abhinaba Sarkar |
| HADOOP-17756 | Increase precommit job timeout from 20 hours to 24 | Major | build | Takanobu Asanuma | Takanobu Asanuma |
| YARN-10802 | Change Capacity Scheduler minimum-user-limit-percent to accept decimal values | Major | capacity scheduler | Benjamin Teke | Benjamin Teke |
| HDFS-16073 | Remove redundant RPC requests for getFileLinkInfo in ClientNamenodeProtocolTranslatorPB | Minor | hdfs-client | lei w | lei w |
| HDFS-16074 | Remove an expensive debug string concatenation | Major | hdfs-client | Wei-Chiu Chuang | Wei-Chiu Chuang |
| HADOOP-17724 | Add Dockerfile for Debian 10 | Major | build | Gautham Banasandra | Gautham Banasandra |
| HDFS-15842 | HDFS mover to emit metrics | Major | balancer & mover | Leon Gao | Leon Gao |
| HDFS-16080 | RBF: Invoking method in all locations should break the loop after successful result | Minor | rbf | Viraj Jasani | Viraj Jasani |
| HDFS-16075 | Use empty array constants present in StorageType and DatanodeInfo to avoid creating redundant objects | Major | hdfs | Viraj Jasani | Viraj Jasani |
| MAPREDUCE-7354 | Use empty array constants present in TaskCompletionEvent to avoid creating redundant objects | Minor | mrv2 | Viraj Jasani | Viraj Jasani |
| HDFS-16082 | Avoid non-atomic operations on exceptionsSinceLastBalance and failedTimesSinceLastSuccessfulBalance in Balancer | Major | balancer | Viraj Jasani | Viraj Jasani |
| HADOOP-17766 | CI for Debian 10 | Major | build | Gautham Banasandra | Gautham Banasandra |
| HDFS-16076 | Avoid using slow DataNodes for reading by sorting locations | Major | hdfs | Tao Li | Tao Li |
| HDFS-16085 | Move the getPermissionChecker out of the read lock | Minor | namanode | Tao Li | Tao Li |
| YARN-10834 | Intra-queue preemption: apps that don't use defined custom resource won't be preempted. | Major | scheduler preemption | Eric Payne | Eric Payne |
| HADOOP-17777 | Update clover-maven-plugin version from 3.3.0 to 4.4.1 | Major | build, common | Wanqiang Ji | Wanqiang Ji |
| HDFS-16096 | Delete useless method DirectoryWithQuotaFeature#setQuota | Major | hdfs | Xiangyi Zhu | Xiangyi Zhu |
| HDFS-16090 | Fine grained locking for datanodeNetworkCounts | Major | datanode | Viraj Jasani | Viraj Jasani |
| HADOOP-17778 | CI for Centos 8 | Major | build | Gautham Banasandra | Gautham Banasandra |
| HDFS-16086 | Add volume information to datanode log for tracing | Minor | datanode | Tao Li | Tao Li |
| YARN-9698 | [Umbrella] Tools to help migration from Fair Scheduler to Capacity Scheduler | Major | capacity scheduler | Weiwei Yang | Weiwei Yang |
| HDFS-16101 | Remove unuse variable and IoException in ProvidedStorageMap | Minor | namenode | lei w | lei w |
| HADOOP-17749 | Remove lock contention in SelectorPool of SocketIOWithTimeout | Major | common | Xuesen Liang | Xuesen Liang |
| HDFS-16114 | the balancer parameters print error | Minor | balancer | jiaguodong | jiaguodong |
| HADOOP-17775 | Remove JavaScript package from Docker environment | Major | build | Masatake Iwasaki | Masatake Iwasaki |
| HDFS-16088 | Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load | Major | namanode | Tao Li | Tao Li |
| HADOOP-17794 | Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS | Major | documentation, kms, security | Akira Ajisaka | Akira Ajisaka |
| HDFS-16122 | Fix DistCpContext#toString() | Minor | distcp | Tao Li | Tao Li |
| HADOOP-12665 | Document | Major | documentation | Arpit Agarwal | Akira Ajisaka |
| HDFS-15785 | Datanode to support using DNS to resolve nameservices to IP addresses to get list of namenodes | Major | datanode | Leon Gao | Leon Gao |
| HADOOP-17672 | Remove an invalid comment content in the FileContext class | Major | common | JiangHua Zhu | JiangHua Zhu |
| YARN-10456 | RM PartitionQueueMetrics records are named QueueMetrics in Simon metrics registry | Major | resourcemanager | Eric Payne | Eric Payne |
| HDFS-15650 | Make the socket timeout for computing checksum of striped blocks configurable | Minor | datanode, ec, erasure-coding | Yushi Hayasaka | Yushi Hayasaka |
| YARN-10858 | [UI2] YARN-10826 breaks Queue view | Major | yarn-ui-v2 | Andras Gyori | Masatake Iwasaki |
| HADOOP-16290 | Enable RpcMetrics units to be configurable | Major | ipc, metrics | Erik Krogen | Viraj Jasani |
| YARN-10860 | Make max container per heartbeat configs refreshable | Major | capacity scheduler | Eric Badger | Eric Badger |
| HADOOP-17813 | Checkstyle - Allow line length: 100 | Major | common | Akira Ajisaka | Viraj Jasani |
| HDFS-16119 | start balancer with parameters -hotBlockTimeInterval xxx is invalid | Minor | balancer | jiaguodong | jiaguodong |
| HDFS-16137 | Improve the comments related to FairCallQueue#queues | Minor | ipc | JiangHua Zhu | JiangHua Zhu |
| HADOOP-17811 | ABFS ExponentialRetryPolicy doesn't pick up configuration values | Minor | documentation, fs/azure | Brian Frank Loss | Brian Frank Loss |
| HADOOP-17819 | Add
extensions to ProtobufRpcEngine RequestHeaderProto | Major | common | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | HDFS-15936 | Solve BlockSender#sendPacket() does not record SocketTimeout exception | Minor | datanode | JiangHua Zhu | JiangHua Zhu | | YARN-10628 | Add node usage metrics in SLS | Major | scheduler-load-simulator | VADAGA ANANYO RAO | VADAGA ANANYO RAO | | YARN-10663 | Add runningApps stats in SLS | Major | yarn | VADAGA ANANYO RAO | VADAGA ANANYO RAO | | YARN-10856 | Prevent ATS v2 health check REST API call if the ATS service itself is disabled. | Major | yarn-ui-v2 | Siddharth Ahuja | Benjamin Teke | | HADOOP-17815 | Run CI for Centos 7 | Critical | build | Gautham Banasandra | Gautham Banasandra | | YARN-10854 | Support marking inactive node as untracked without configured include path | Major | resourcemanager | Tao Yang | Tao Yang | | HDFS-16149 | Improve the parameter annotation in FairCallQueue#priorityLevels | Minor | ipc | JiangHua Zhu | JiangHua Zhu | | YARN-10874 | Refactor NM ContainerLaunch#getEnvDependenciess unit tests | Minor | yarn | Tamas Domok | Tamas Domok | | HDFS-16146 | All three replicas are lost due to not adding a new DataNode in time | Major | dfsclient | Shuyan Zhang | Shuyan Zhang | | YARN-10355 | Refactor NM ContainerLaunch.java#orderEnvByDependencies | Minor | yarn | Benjamin Teke | Tamas Domok | | YARN-10849 | Clarify testcase documentation for TestServiceAM#testContainersReleasedWhenPreLaunchFails | Minor | test | Szilard Nemeth | Szilard Nemeth | | HDFS-16153 | Avoid evaluation of LOG.debug statement in QuorumJournalManager | Trivial | journal-node | Zhaohui Wang | Zhaohui Wang | | HDFS-16154 | TestMiniJournalCluster failing intermittently because of not reseting UserGroupInformation completely | Minor | journal-node | Zhaohui Wang | Zhaohui Wang | | HADOOP-17837 | Make it easier to debug UnknownHostExceptions from NetUtils.connect | Minor | common | Bryan Beaudreault | Bryan Beaudreault | | HADOOP-17787 | Refactor fetching of credentials in Jenkins | Major | build | Gautham Banasandra | Gautham Banasandra | | HDFS-15976 | Make mkdtemp cross platform | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16163 | Avoid locking entire blockPinningFailures map | Major | balancer | Viraj Jasani | Viraj Jasani | | HADOOP-17825 | Add BuiltInGzipCompressor | Major | common | L. C. Hsieh | L. C. Hsieh | | HDFS-16162 | Improve DFSUtil#checkProtectedDescendants() related parameter comments | Major | documentation | JiangHua Zhu | JiangHua Zhu | | HDFS-16160 | Improve the parameter annotation in DatanodeProtocol#sendHeartbeat | Minor | datanode | Tao Li | Tao Li | | HDFS-16180 | FsVolumeImpl.nextBlock should consider that the block meta file has been deleted. | Minor | datanode | Max Xie | Max Xie | | HDFS-16175 | Improve the configurable value of Server #PURGEINTERVALNANOS | Major | ipc | JiangHua Zhu | JiangHua Zhu | | HDFS-16173 |" }, { "data": "CopyCommands#Put#executor queue configurability | Major | fs | JiangHua Zhu | JiangHua Zhu | | YARN-10891 | Extend QueueInfo with max-parallel-apps in CapacityScheduler | Major | capacity scheduler | Tamas Domok | Tamas Domok | | HADOOP-17544 | Mark KeyProvider as Stable | Major | security | Akira Ajisaka | Akira Ajisaka | | HDFS-15966 | Empty the statistical parameters when emptying the redundant queue | Minor | hdfs | zhanghuazong | zhanghuazong | | HDFS-16202 | Use constants HdfsClientConfigKeys.Failover.PREFIX instead of dfs.client.failover. 
| Minor | hdfs-client | Weison Wei | Weison Wei | | HDFS-16138 | BlockReportProcessingThread exit doesnt print the actual stack | Major | block placement | Renukaprasad C | Renukaprasad C | | HDFS-16204 | Improve FSDirEncryptionZoneOp related parameter comments | Major | documentation | JiangHua Zhu | JiangHua Zhu | | HDFS-16209 | Add description for dfs.namenode.caching.enabled | Major | documentation | Tao Li | Tao Li | | HADOOP-17897 | Allow nested blocks in switch case in checkstyle settings | Minor | build | Masatake Iwasaki | Masatake Iwasaki | | YARN-10693 | Add documentation for YARN-10623 auto refresh queue conf in CS | Major | capacity scheduler | Qi Zhu | Qi Zhu | | HADOOP-17857 | Check real user ACLs in addition to proxied user ACLs | Major | security | Eric Payne | Eric Payne | | HADOOP-17887 | Remove GzipOutputStream | Major | common | L. C. Hsieh | L. C. Hsieh | | HDFS-16065 | RBF: Add metrics to record Routers operations | Major | rbf | Janus Chow | Janus Chow | | HDFS-16188 | RBF: Router to support resolving monitored namenodes with DNS | Minor | rbf | Leon Gao | Leon Gao | | HDFS-16210 | RBF: Add the option of refreshCallQueue to RouterAdmin | Major | rbf | Janus Chow | Janus Chow | | HDFS-15160 | ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock | Major | datanode | Stephen ODonnell | Stephen ODonnell | | HDFS-16197 | Simplify getting NNStorage in FSNamesystem | Major | namenode | JiangHua Zhu | JiangHua Zhu | | YARN-10928 | Support default queue properties of capacity scheduler to simplify configuration management | Major | capacity scheduler | Weihao Zheng | Weihao Zheng | | HDFS-16221 | RBF: Add usage of refreshCallQueue for Router | Major | rbf | Janus Chow | Janus Chow | | HDFS-16223 | AvailableSpaceRackFaultTolerantBlockPlacementPolicy should use chooseRandomWithStorageTypeTwoTrial() for better performance. | Major | block placement | Ayush Saxena | Ayush Saxena | | HADOOP-17900 | Move ClusterStorageCapacityExceededException to Public from LimitedPrivate | Major | common, hdfs-client | Ayush Saxena | Ayush Saxena | | HDFS-15920 | Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured | Major | block placement | JiangHua Zhu | JiangHua Zhu | | HDFS-16225 | Fix typo for FederationTestUtils | Minor | rbf | Tao Li | Tao Li | | HADOOP-17913 | Filter deps with release labels | Blocker | build | Gautham Banasandra | Gautham Banasandra | | HADOOP-17914 | Print RPC response length in the exception message | Minor | ipc | Tao Li | Tao Li | | HDFS-16229 | Remove the use of obsolete BLOCKDELETIONINCREMENT | Trivial | documentation, namenode | JiangHua Zhu | JiangHua Zhu | | HADOOP-17893 | Improve PrometheusSink for Namenode TopMetrics | Major | metrics | Max Xie | Max Xie | | HADOOP-17926 | Maven-eclipse-plugin is no longer needed since Eclipse can import Maven projects by itself. | Minor | documentation | Rintaro Ikeda | Rintaro Ikeda | | HDFS-16063 | Add toString to EditLogFileInputStream | Minor | namanode | David Mollitor | Dionisii Iuzhakov | | YARN-10935 | AM Total Queue Limit goes below per-user AM Limit if parent is full. 
| Major | capacity scheduler, capacityscheduler | Eric Payne | Eric Payne | | HDFS-16232 | Fix java doc" }, { "data": "BlockReaderRemote#newBlockReader | Minor | documentation | Tao Li | Tao Li | | HADOOP-17939 | Support building on Apple Silicon | Major | build, common | Dongjoon Hyun | Dongjoon Hyun | | HDFS-16237 | Record the BPServiceActor information that communicates with Standby | Major | datanode | JiangHua Zhu | JiangHua Zhu | | HADOOP-17941 | Update xerces to 2.12.1 | Minor | build, common | Zhongwei Zhu | Zhongwei Zhu | | HADOOP-17905 | Modify Text.ensureCapacity() to efficiently max out the backing array size | Major | io | Peter Bacsko | Peter Bacsko | | HDFS-16246 | Print lockWarningThreshold in InstrumentedLock#logWarning and InstrumentedLock#logWaitWarning | Minor | common | Tao Li | Tao Li | | HDFS-16238 | Improve comments related to EncryptionZoneManager | Minor | documentation, encryption, namenode | JiangHua Zhu | JiangHua Zhu | | HDFS-16242 | JournalMetrics should add JournalId MetricTag to distinguish different nameservice journal metrics. | Minor | journal-node | Max Xie | Max Xie | | HDFS-16247 | RBF: Fix the ProcessingAvgTime and ProxyAvgTime code comments and document metrics describe ms unit | Major | rbf | Haiyang Hu | Haiyang Hu | | HDFS-16250 | Refactor AllowSnapshotMock using GMock | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16252 | Correct docs for dfs.http.client.retry.policy.spec | Major | documentation | Stephen ODonnell | Stephen ODonnell | | HDFS-16251 | Make hdfs_cat tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16263 | Add CMakeLists for hdfs_allowSnapshot | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16241 | Standby close reconstruction thread | Major | namanode | zhanghuazong | zhanghuazong | | HDFS-16264 | When adding block keys, the records come from the specific Block Pool | Minor | datanode | JiangHua Zhu | JiangHua Zhu | | HDFS-16260 | Make hdfs_deleteSnapshot tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16267 | Make hdfs_df tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16274 | Improve error msg for FSNamesystem#startFileInt | Minor | namanode | Tao Li | Tao Li | | HADOOP-17888 | The error of Constant annotation in AzureNativeFileSystemStore.java | Minor | fs/azure | guoxin | guoxin | | HDFS-16277 | Improve decision in AvailableSpaceBlockPlacementPolicy | Major | block placement | guophilipse | guophilipse | | HADOOP-17770 | WASB : Support disabling buffered reads in positional reads | Major | fs/azure | Anoop Sam John | Anoop Sam John | | HDFS-16282 | Duplicate generic usage information to hdfs debug command | Minor | tools | daimin | daimin | | YARN-1115 | Provide optional means for a scheduler to check real user ACLs | Major | capacity scheduler, scheduler | Eric Payne | Eric Payne | | HDFS-16279 | Print detail datanode info when process first storage report | Minor | datanode | Tao Li | Tao Li | | HDFS-16091 | WebHDFS should support getSnapshotDiffReportListing | Major | webhdfs | Masatake Iwasaki | Masatake Iwasaki | | HDFS-16290 | Make log more standardized when executing verifyAndSetNamespaceInfo() | Minor | datanode | JiangHua Zhu | JiangHua Zhu | | HDFS-16286 | Debug tool to verify the correctness of erasure coding on file | Minor | erasure-coding, 
tools | daimin | daimin | | HDFS-16266 | Add remote port information to HDFS audit log | Major | ipc, namanode | Tao Li | Tao Li | | HDFS-16291 | Make the comment" }, { "data": "INode#ReclaimContext more standardized | Minor | documentation, namenode | JiangHua Zhu | JiangHua Zhu | | HDFS-16294 | Remove invalid DataNode#CONFIGPROPERTYSIMULATED | Major | datanode | JiangHua Zhu | JiangHua Zhu | | HDFS-16296 | RBF: RouterRpcFairnessPolicyController add denied permits for each nameservice | Major | rbf | Janus Chow | Janus Chow | | HDFS-16273 | RBF: RouterRpcFairnessPolicyController add availableHandleOnPerNs metrics | Major | rbf | Xiangyi Zhu | Xiangyi Zhu | | HDFS-16302 | RBF: RouterRpcFairnessPolicyController record requests handled by each nameservice | Major | rbf | Janus Chow | Janus Chow | | HDFS-16307 | Improve HdfsBlockPlacementPolicies docs readability | Minor | documentation | guophilipse | guophilipse | | HDFS-16299 | Fix bug for TestDataNodeVolumeMetrics#verifyDataNodeVolumeMetrics | Minor | datanode, test | Tao Li | Tao Li | | HDFS-16301 | Improve BenchmarkThroughput#SIZE naming standardization | Minor | benchmarks, test | JiangHua Zhu | JiangHua Zhu | | HDFS-16305 | Record the remote NameNode address when the rolling log is triggered | Major | namenode | JiangHua Zhu | JiangHua Zhu | | HDFS-16287 | Support to make dfs.namenode.avoid.read.slow.datanode reconfigurable | Major | datanode | Haiyang Hu | Haiyang Hu | | YARN-10997 | Revisit allocation and reservation logging | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HDFS-16321 | Fix invalid config in TestAvailableSpaceRackFaultTolerantBPP | Minor | test | guophilipse | guophilipse | | YARN-11001 | Add docs on removing node label mapping from a node | Minor | documentation | Manu Zhang | Manu Zhang | | HDFS-16315 | Add metrics related to Transfer and NativeCopy for DataNode | Major | datanode, metrics | Tao Li | Tao Li | | HDFS-16310 | RBF: Add client port to CallerContext for Router | Major | rbf | Tao Li | Tao Li | | HDFS-16320 | Datanode retrieve slownode information from NameNode | Major | datanode | Janus Chow | Janus Chow | | HADOOP-17998 | Allow get command to run with multi threads. | Major | fs | Chengwei Wang | Chengwei Wang | | HDFS-16344 | Improve DirectoryScanner.Stats#toString | Major | . | Tao Li | Tao Li | | HADOOP-18023 | Allow cp command to run with multi threads. 
| Major | fs | Chengwei Wang | Chengwei Wang | | HADOOP-18029 | Update CompressionCodecFactory to handle uppercase file extensions | Minor | common, io, test | Desmond Sisson | Desmond Sisson | | HDFS-16358 | HttpFS implementation for getSnapshotDiffReportListing | Major | httpfs | Viraj Jasani | Viraj Jasani | | HDFS-16364 | Remove unnecessary brackets in NameNodeRpcServer#L453 | Trivial | namanode | Zhaohui Wang | Zhaohui Wang | | HDFS-16314 | Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable | Major | block placement | Haiyang Hu | Haiyang Hu | | HDFS-16338 | Fix error configuration message in FSImage | Minor | hdfs | guophilipse | guophilipse | | HDFS-16351 | Add path exception information in FSNamesystem | Minor | hdfs | guophilipse | guophilipse | | HDFS-16354 | Add description of GETSNAPSHOTDIFFLISTING to WebHDFS doc | Minor | documentation, webhdfs | Masatake Iwasaki | Masatake Iwasaki | | HDFS-16345 | Fix test cases fail in TestBlockStoragePolicy | Major | build | guophilipse | guophilipse | | HADOOP-18034 | Bump mina-core from 2.0.16 to 2.1.5 in /hadoop-project | Major | build | Ayush Saxena | Ayush Saxena | | HADOOP-18001 | Update to Jetty 9.4.44 | Major | build, common | Yuan Luo | Yuan Luo | | HADOOP-18040 | Use" }, { "data": "instead of ignoreTestFailure | Major | build | Akira Ajisaka | Akira Ajisaka | | HADOOP-17643 | WASB : Make metadata checks case insensitive | Major | fs/azure | Anoop Sam John | Anoop Sam John | | HADOOP-17982 | OpensslCipher initialization error should log a WARN message | Trivial | kms, security | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-18042 | Fix jetty version in LICENSE-binary | Major | build, common | Yuan Luo | Yuan Luo | | HDFS-16327 | Make dfs.namenode.max.slowpeer.collect.nodes reconfigurable | Major | namenode | Tao Li | Tao Li | | HDFS-16378 | Add datanode address to BlockReportLeaseManager logs | Minor | datanode | Tao Li | Tao Li | | HDFS-16375 | The FBR lease ID should be exposed to the log | Major | datanode | Tao Li | Tao Li | | YARN-11048 | Add tests that shows how to delete config values with Mutation API | Minor | capacity scheduler, restapi | Szilard Nemeth | Szilard Nemeth | | HDFS-16352 | return the real datanode numBlocks in #getDatanodeStorageReport | Major | datanode | qinyuren | qinyuren | | YARN-11050 | Typo in method name: RMWebServiceProtocol#removeFromCluserNodeLabels | Trivial | resourcemanager, webservice | Szilard Nemeth | Szilard Nemeth | | HDFS-16386 | Reduce DataNode load when FsDatasetAsyncDiskService is working | Major | datanode | JiangHua Zhu | JiangHua Zhu | | HDFS-16391 | Avoid evaluation of LOG.debug statement in NameNodeHeartbeatService | Trivial | rbf | Zhaohui Wang | Zhaohui Wang | | YARN-8234 | Improve RM system metrics publishers performance by pushing events to timeline server in batch | Critical | resourcemanager, timelineserver | Hu Ziqian | Ashutosh Gupta | | HADOOP-18052 | Support Apple Silicon in start-build-env.sh | Major | build | Akira Ajisaka | Akira Ajisaka | | HDFS-16348 | Mark slownode as badnode to recover pipeline | Major | datanode | Janus Chow | Janus Chow | | HADOOP-18060 | RPCMetrics increases the number of handlers in processing | Major | rpc-server | JiangHua Zhu | JiangHua Zhu | | HDFS-16407 | Make hdfs_du tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HADOOP-18056 | DistCp: Filter duplicates in the source paths | Major | tools/distcp | Ayush Saxena | Ayush Saxena | | 
HDFS-16404 | Fix typo for CachingGetSpaceUsed | Minor | fs | Tao Li | Tao Li | | HADOOP-18044 | Hadoop - Upgrade to JQuery 3.6.0 | Major | build, common | Yuan Luo | Yuan Luo | | HDFS-16043 | Add markedDeleteBlockScrubberThread to delete blocks asynchronously | Major | hdfs, namanode | Xiangyi Zhu | Xiangyi Zhu | | HDFS-16426 | fix nextBlockReportTime when trigger full block report force | Major | datanode | qinyuren | qinyuren | | HDFS-16430 | Validate maximum blocks in EC group when adding an EC policy | Minor | ec, erasure-coding | daimin | daimin | | HDFS-16403 | Improve FUSE IO performance by supporting FUSE parameter max_background | Minor | fuse-dfs | daimin | daimin | | HDFS-16262 | Async refresh of cached locations in DFSInputStream | Major | hdfs-client | Bryan Beaudreault | Bryan Beaudreault | | HDFS-16401 | Remove the worthless DatasetVolumeChecker#numAsyncDatasetChecks | Major | datanode | JiangHua Zhu | JiangHua Zhu | | HADOOP-18093 | Better exception handling for testFileStatusOnMountLink() in" }, { "data": "| Trivial | test | Xing Lin | Xing Lin | | HDFS-16423 | balancer should not get blocks on stale storages | Major | balancer & mover | qinyuren | qinyuren | | HDFS-16444 | Show start time of JournalNode on Web | Major | journal-node | Tao Li | Tao Li | | YARN-10459 | containerLaunchedOnNode method not need to hold schedulerApptemt lock | Major | scheduler | Ryan Wu | Minni Mittal | | HDFS-16445 | Make HDFS count, mkdir, rm cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16447 | RBF: Registry HDFS Routers RPCServer & RPCClient metrics for PrometheusSink | Minor | rbf | Max Xie | Max Xie | | HADOOP-18110 | ViewFileSystem: Add Support for Localized Trash Root | Major | common | Xing Lin | Xing Lin | | HDFS-16440 | RBF: Support router get HAServiceStatus with Lifeline RPC address | Minor | rbf | YulongZ | YulongZ | | HADOOP-18117 | Add an option to preserve root directory permissions | Minor | tools | Mohanad Elsafty | Mohanad Elsafty | | HDFS-16459 | RBF: register RBFMetrics in MetricsSystem for promethuessink | Minor | rbf | Max Xie | Max Xie | | HDFS-16461 | Expose JournalNode storage info in the jmx metrics | Major | journal-node, metrics | Viraj Jasani | Viraj Jasani | | YARN-10580 | Fix some issues in TestRMWebServicesCapacitySchedDynamicConfig | Minor | resourcemanager, webservice | Szilard Nemeth | Tamas Domok | | HADOOP-18139 | Allow configuration of zookeeper server principal | Major | auth | Owen OMalley | Owen OMalley | | HDFS-16480 | Fix typo: indicies -> indices | Minor | block placement | Jiale Qi | Jiale Qi | | HADOOP-18128 | outputstream.md typo issue | Major | documentation | leo sun | leo sun | | YARN-11076 | Upgrade jQuery version in Yarn UI2 | Major | yarn-ui-v2 | Tamas Domok | Tamas Domok | | HDFS-16462 | Make HDFS get tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | YARN-11049 | MutableConfScheduler is referred as plain String instead of class name | Minor | resourcemanager | Szilard Nemeth | Szilard Nemeth | | HDFS-15382 | Split one FsDatasetImpl lock to volume grain locks. | Major | datanode | Mingxiang Li | Mingxiang Li | | HDFS-16495 | RBF should prepend the client ip rather than append it. 
| Major | rbf | Owen OMalley | Owen OMalley | | HADOOP-18144 | getTrashRoot/s in ViewFileSystem should return viewFS path, not targetFS path | Major | common | Xing Lin | Xing Lin | | HDFS-16494 | Removed reuse" }, { "data": "AvailableSpaceVolumeChoosingPolicy#initLocks() | Major | datanode | JiangHua Zhu | JiangHua Zhu | | HDFS-16470 | Make HDFS find tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16504 | Add parameter for NameNode to process getBloks request | Minor | balancer & mover, namanode | Max Xie | Max Xie | | YARN-11086 | Add space in debug log of ParentQueue | Minor | capacity scheduler | Junfan Zhang | Junfan Zhang | | HDFS-16471 | Make HDFS ls tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | YARN-10547 | Decouple job parsing logic from SLSRunner | Minor | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | YARN-10552 | Eliminate code duplication in SLSCapacityScheduler and SLSFairScheduler | Minor | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | YARN-11094 | Follow up changes for YARN-10547 | Minor | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | HDFS-16434 | Add opname to read/write lock for remaining operations | Major | block placement | Tao Li | Tao Li | | YARN-10548 | Decouple AM runner logic from SLSRunner | Minor | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | YARN-11052 | Improve code quality in TestRMWebServicesNodeLabels | Minor | test | Szilard Nemeth | Szilard Nemeth | | YARN-10549 | Decouple RM runner logic from SLSRunner | Minor | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | YARN-10550 | Decouple NM runner logic from SLSRunner | Minor | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | YARN-11088 | Introduce the config to control the AM allocated to non-exclusive nodes | Major | capacity scheduler | Junfan Zhang | Junfan Zhang | | YARN-11103 | SLS cleanup after previously merged SLS refactor jiras | Minor | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | HDFS-16472 | Make HDFS setrep tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16527 | Add global timeout rule for TestRouterDistCpProcedure | Minor | test | Tao Li | Tao Li | | HDFS-16529 | Remove unnecessary setObserverRead in TestConsistentReadsObserver | Trivial | test | Zhaohui Wang | Zhaohui Wang | | HDFS-16530 | setReplication debug log creates a new string even if debug is disabled | Major | namenode | Stephen ODonnell | Stephen ODonnell | | HADOOP-18188 | Support touch command for directory | Major | common | Akira Ajisaka | Viraj Jasani | | HDFS-16457 | Make fs.getspaceused.classname reconfigurable | Major | namenode | yanbin.zhang | yanbin.zhang | | HDFS-16427 | Add debug log for BlockManager#chooseExcessRedundancyStriped | Minor | erasure-coding | Tao Li | Tao Li | | HDFS-16497 | EC: Add param comment for liveBusyBlockIndices with HDFS-14768 | Minor | erasure-coding, namanode | caozhiqiang | caozhiqiang | | HDFS-16473 | Make HDFS stat tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16516 | fix filesystemshell wrong params | Minor | documentation | guophilipse | guophilipse | | HDFS-16474 | Make HDFS tail tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HDFS-16389 | Improve NNThroughputBenchmark test 
mkdirs | Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu | | MAPREDUCE-7373 | Building MapReduce NativeTask fails on Fedora 34+ | Major | build, nativetask | Kengo Seki | Kengo Seki | | HDFS-16355 | Improve the description of dfs.block.scanner.volume.bytes.per.second | Minor | documentation, hdfs | guophilipse | guophilipse | | HADOOP-18155 | Refactor tests in TestFileUtil | Trivial | common | Gautham Banasandra | Gautham Banasandra | | HADOOP-18088 | Replace log4j 1.x with reload4j | Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | HDFS-16501 | Print the exception when reporting a bad block | Major | datanode | qinyuren | qinyuren | | HDFS-16500 | Make asynchronous blocks deletion lock and unlock durtion threshold configurable | Major | namanode | Chengwei Wang | Chengwei Wang | | HADOOP-17551 | Upgrade maven-site-plugin to 3.11.0 | Major | build, common | Akira Ajisaka | Ashutosh Gupta | | HADOOP-18214 | Update BUILDING.txt | Minor | build, documentation | Steve Loughran | Steve Loughran | | HDFS-16519 | Add throttler to EC reconstruction | Minor | datanode, ec | daimin | daimin | | HADOOP-16202 | Enhance openFile() for better read performance against object stores | Major | fs, fs/s3, tools/distcp | Steve Loughran | Steve Loughran | | HDFS-16554 | Remove unused configuration dfs.namenode.block.deletion.increment. | Major | namenode | Chengwei Wang | Chengwei Wang | | HDFS-16539 | RBF: Support refreshing/changing router fairness policy controller without rebooting router | Minor | rbf | Felix N | Felix N | | HDFS-16553 | Fix checkstyle for the length of BlockManager construction method over" }, { "data": "| Major | namenode | Chengwei Wang | Chengwei Wang | | YARN-11116 | Migrate Times util from SimpleDateFormat to thread-safe DateTimeFormatter class | Minor | utils, yarn-common | Jonathan Turner Eagles | Jonathan Turner Eagles | | HDFS-16562 | Upgrade moment.min.js to 2.29.2 | Major | build, common | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | HDFS-16468 | Define ssize_t for Windows | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16520 | Improve EC pread: avoid potential reading whole block | Major | dfsclient, ec, erasure-coding | daimin | daimin | | MAPREDUCE-7379 | RMContainerRequestor#makeRemoteRequest has confusing log message | Trivial | mrv2 | Szilard Nemeth | Ashutosh Gupta | | HADOOP-18167 | Add metrics to track delegation token secret manager operations | Major | metrics, security | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | YARN-11114 | RMWebServices returns only apps matching exactly the submitted queue name | Major | capacity scheduler, webapp | Szilard Nemeth | Szilard Nemeth | | HADOOP-18193 | Support nested mount points in INodeTree | Major | viewfs | Lei Yang | Lei Yang | | HDFS-16465 | Remove redundant strings.h inclusions | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | YARN-10080 | Support show app id on localizer thread pool | Major | nodemanager | zhoukang | Ashutosh Gupta | | HDFS-16584 | Record StandbyNameNode information when Balancer is running | Major | balancer & mover, namenode | JiangHua Zhu | JiangHua Zhu | | HADOOP-18172 | Change scope of getRootFallbackLink for InodeTree to make them accessible from outside package | Minor | fs | Xing Lin | Xing Lin | | HADOOP-18249 | Fix getUri() in HttpRequest has been deprecated | Major | common | Shilun Fan | Shilun Fan | | HADOOP-18240 | Upgrade Yetus to 0.14.0 | Major | build | Akira Ajisaka | Ashutosh Gupta | | HDFS-16585 | Add 
@VisibleForTesting in Dispatcher.java after HDFS-16268 | Trivial | balancer | Wei-Chiu Chuang | Ashutosh Gupta | | HDFS-16599 | Fix typo in hadoop-hdfs-rbf module | Minor | rbf | Shilun Fan | Shilun Fan | | YARN-11142 | Remove unused Imports in Hadoop YARN project | Minor | yarn | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16603 | Improve DatanodeHttpServer With Netty recommended method | Minor | datanode | Shilun Fan | Shilun Fan | | HDFS-16610 | Make fsck read timeout configurable | Major | hdfs-client | Stephen ODonnell | Stephen ODonnell | | HDFS-16576 | Remove unused imports in HDFS project | Minor | hdfs | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16621 | Remove unused JNStorage#getCurrentDir() | Minor | journal-node, qjm | JiangHua Zhu | JiangHua Zhu | | HDFS-16463 | Make dirent cross platform compatible | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16627 |" }, { "data": "BPServiceActor#register log to add NameNode address | Minor | hdfs | Shilun Fan | Shilun Fan | | HDFS-16609 | Fix Flakes Junit Tests that often report timeouts | Major | test | Shilun Fan | Shilun Fan | | YARN-11175 | Refactor LogAggregationFileControllerFactory | Minor | log-aggregation | Szilard Nemeth | Szilard Nemeth | | YARN-11176 | Refactor TestAggregatedLogDeletionService | Minor | log-aggregation | Szilard Nemeth | Szilard Nemeth | | HDFS-16469 | Locate protoc-gen-hrpc across platforms | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16613 | EC: Improve performance of decommissioning dn with many ec blocks | Major | ec, erasure-coding, namenode | caozhiqiang | caozhiqiang | | HDFS-16629 | [JDK 11] Fix javadoc warnings in hadoop-hdfs module | Minor | hdfs | Shilun Fan | Shilun Fan | | YARN-11172 | Fix testDelegationToken | Major | test | Chenyu Zheng | Chenyu Zheng | | YARN-11182 | Refactor TestAggregatedLogDeletionService: 2nd phase | Minor | log-aggregation | Szilard Nemeth | Szilard Nemeth | | HADOOP-18271 | Remove unused Imports in Hadoop Common project | Minor | common | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-18288 | Total requests and total requests per sec served by RPC servers | Major | rpc-server | Viraj Jasani | Viraj Jasani | | HADOOP-18314 | Add some description for PowerShellFencer | Major | ha | JiangHua Zhu | JiangHua Zhu | | HADOOP-18284 | Remove Unnecessary semicolon ; | Minor | common | Shilun Fan | Shilun Fan | | YARN-11202 | Optimize ClientRMService.getApplications | Major | yarn | Tamas Domok | Tamas Domok | | HDFS-16647 | Delete unused NameNode#FSHDFSIMPL_KEY | Minor | namenode | JiangHua Zhu | JiangHua Zhu | | HDFS-16638 | Add isDebugEnabled check for debug blockLogs in BlockManager | Trivial | namenode | dzcxzl | dzcxzl | | HADOOP-18297 | Upgrade dependency-check-maven to 7.1.1 | Minor | security | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16466 | Implement Linux permission flags on Windows | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | MAPREDUCE-7201 | Make Job History File Permissions configurable | Major | jobhistoryserver | Prabhu Joseph | Ashutosh Gupta | | HADOOP-18294 | Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums | Minor | common | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-18336 | tag FSDataInputStream.getWrappedStream() @Public/@Stable | Minor | fs | Steve Loughran | Ashutosh Gupta | | HADOOP-13144 | Enhancing IPC client throughput via multiple connections per user | Minor | ipc | Jason Kace | igo Goiri | | HADOOP-18332 | Remove rs-api dependency by downgrading jackson to 
2.12.7 | Major | build | PJ Fanning | PJ Fanning | | HDFS-16666 | Pass CMake args for Windows in pom.xml | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16464 | Create only libhdfspp static libraries for Windows | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16640 | RBF: Show datanode IP list when click DN histogram in Router | Minor | rbf | Zhaohui Wang | Zhaohui Wang | | HDFS-16605 | Improve Code With Lambda in hadoop-hdfs-rbf module | Minor | rbf | Shilun Fan | Shilun Fan | | HDFS-16467 | Ensure Protobuf generated headers are included first | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16655 | OIV: print out erasure coding policy name in oiv Delimited output | Minor | erasure-coding | Max Xie | Max Xie | | HDFS-16660 | Improve Code With Lambda in IPCLoggerChannel class | Minor | journal-node | ZanderXu | ZanderXu | | HDFS-16619 | Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import. | Major | hdfs | Shilun Fan | Shilun Fan | | HDFS-16658 | BlockManager should output some logs when logEveryBlock is true. | Minor | block placement | ZanderXu | ZanderXu | | HDFS-16671 | RBF: RouterRpcFairnessPolicyController supports configurable permit acquire timeout | Major | rbf | ZanderXu | ZanderXu | | YARN-11063 | Support auto queue creation template wildcards for arbitrary queue depths | Major | capacity scheduler | Andras Gyori | Bence Kosztolnik | | HADOOP-18358 | Update commons-math3 from 3.1.1 to 3.6.1. | Minor | build, common | Shilun Fan | Shilun Fan | | HADOOP-18301 | Upgrade commons-io to" }, { "data": "| Minor | build, common | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16712 | Fix incorrect placeholder in DataNode.java | Major | datanode | ZanderXu | ZanderXu | | YARN-11029 | Refactor AMRMProxy Service code and Added Some Metrics | Major | amrmproxy | Minni Mittal | Shilun Fan | | HDFS-16642 | [SBN read] Moving selecting inputstream from JN in EditlogTailer out of FSNLock | Major | ha, namanode | ZanderXu | ZanderXu | | HDFS-16648 | Normalize the usage of debug logs in NameNode | Minor | namanode | ZanderXu | ZanderXu | | HDFS-16709 | Remove redundant cast in FSEditLogOp.class | Major | namanode | ZanderXu | ZanderXu | | MAPREDUCE-7385 | impove JobEndNotifier#httpNotification With recommended methods | Minor | mrv1 | Shilun Fan | Shilun Fan | | HDFS-16702 | MiniDFSCluster should report cause of exception in assertion error | Minor | hdfs | Steve Vaughan | Steve Vaughan | | HDFS-16723 | Replace incorrect SafeModeException with StandbyException in RouterRpcServer.class | Major | rbf | ZanderXu | ZanderXu | | HDFS-16678 | RBF supports disable getNodeUsage() in RBFMetrics | Major | rbf | ZanderXu | ZanderXu | | HDFS-16704 | Datanode return empty response instead of NPE for GetVolumeInfo during restarting | Major | datanode | ZanderXu | ZanderXu | | YARN-10885 | Make FederationStateStoreFacade#getApplicationHomeSubCluster use JCache | Major | federation | chaosju | Shilun Fan | | HDFS-16705 | RBF: Support healthMonitor timeout configurable and cache NN and client proxy in NamenodeHeartbeatService | Major | rbf | ZanderXu | ZanderXu | | HADOOP-18365 | Updated addresses are still accessed using the old IP address | Major | common | Steve Vaughan | Steve Vaughan | | HDFS-16717 | Replace NPE with IOException in DataNode.class | Major | datanode | ZanderXu | ZanderXu | | HDFS-16687 | RouterFsckServlet replicates code from DfsServlet base class | Major | federation | Steve Vaughan | Steve Vaughan | | HADOOP-18333 | 
hadoop-client-runtime impact by CVE-2022-2047 CVE-2022-2048 due to shaded jetty | Major | build | phoebe chen | Ashutosh Gupta | | HADOOP-18361 | Update commons-net from 3.6 to 3.8.0. | Minor | common | Shilun Fan | Shilun Fan | | HADOOP-18406 | Adds alignment context to call path for creating RPC proxy with multiple connections per user. | Major | ipc | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | YARN-11253 | Add Configuration to delegationToken RemoverScanInterval | Major | resourcemanager | Shilun Fan | Shilun Fan | | HDFS-16684 | Exclude self from JournalNodeSyncer when using a bind host | Major | journal-node | Steve Vaughan | Steve Vaughan | | HDFS-16735 | Reduce the number of HeartbeatManager loops | Major | datanode, namanode | Shuyan Zhang | Shuyan Zhang | | YARN-11196 | NUMA Awareness support in DefaultContainerExecutor | Major | nodemanager | Prabhu Joseph | Samrat Deb | | MAPREDUCE-7409 | Make shuffle key length configurable | Major | mrv2 | Andrs Gyri | Ashutosh Gupta | | HADOOP-18441 | Remove org.apache.hadoop.maven.plugin.shade.resource.ServicesResourceTransformer | Major | common | PJ Fanning | PJ Fanning | | HADOOP-18388 | Allow dynamic groupSearchFilter in LdapGroupsMapping | Major | security | Ayush Saxena | Ayush Saxena | | HADOOP-18427 | Improve ZKDelegationTokenSecretManager#startThead With recommended methods. | Minor | common | Shilun Fan | Shilun Fan | | YARN-11278 | Ambiguous error message in mutation API | Major | capacity scheduler | Andrs Gyri | Ashutosh Gupta | | YARN-11274 | Improve Nodemanager#NodeStatusUpdaterImpl Log | Minor | nodemanager | Shilun Fan | Shilun Fan | | YARN-11286 |" }, { "data": "AsyncDispatcher#printEventDetailsExecutor thread pool parameter configurable | Minor | resourcemanager | Shilun Fan | Shilun Fan | | HDFS-16663 | EC: Allow block reconstruction pending timeout refreshable to increase decommission performance | Major | ec, namenode | caozhiqiang | caozhiqiang | | HDFS-16770 | [Documentation] RBF: Duplicate statement to be removed for better readabilty | Minor | documentation, rbf | Renukaprasad C | Renukaprasad C | | HDFS-16686 | GetJournalEditServlet fails to authorize valid Kerberos request | Major | journal-node | Steve Vaughan | Steve Vaughan | | HADOOP-15072 | Upgrade Apache Kerby version to 2.0.x | Major | security | Jiajia Li | Colm O hEigeartaigh | | HADOOP-18118 | Fix KMS Accept Queue Size default value to 500 | Minor | common | guophilipse | guophilipse | | MAPREDUCE-7407 | Avoid stopContainer() on dead node | Major | mrv2 | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-18446 | Add a re-queue metric to RpcMetrics.java to quantify the number of re-queue RPCs | Minor | metrics | ZanderXu | ZanderXu | | YARN-11303 | Upgrade jquery ui to 1.13.2 | Major | security | D M Murali Krishna Reddy | Ashutosh Gupta | | HDFS-16341 | Fix BlockPlacementPolicy details in hdfs defaults | Minor | documentation | guophilipse | guophilipse | | HADOOP-18451 | Update hsqldb.version from 2.3.4 to 2.5.2 | Major | common | Shilun Fan | Shilun Fan | | HADOOP-16769 | LocalDirAllocator to provide diagnostics when file creation fails | Minor | util | Ramesh Kumar Thangarajan | Ashutosh Gupta | | HADOOP-18341 | upgrade commons-configuration2 to 2.8.0 and commons-text to 1.9 | Major | common | PJ Fanning | PJ Fanning | | HDFS-16776 | Erasure Coding: The length of targets should be checked when DN gets a reconstruction task | Major | erasure-coding | Ruinan Gu | Ruinan Gu | | HDFS-16771 | JN should tersely print logs about NewerTxnIdException | Major 
| journal-node | ZanderXu | ZanderXu | | HADOOP-18466 | Limit the findbugs suppression IS2INCONSISTENTSYNC to S3AFileSystem field | Minor | fs/s3 | Viraj Jasani | Viraj Jasani | | YARN-11306 | Refactor NM#FederationInterceptor#recover Code | Major | federation, nodemanager | Shilun Fan | Shilun Fan | | YARN-11290 | Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster | Minor | federation | Shilun Fan | Shilun Fan | | YARN-11240 | Fix incorrect placeholder in yarn-module | Minor | yarn | Shilun Fan | Shilun Fan | | YARN-6169 | container-executor message on empty configuration file can be improved | Trivial | container-executor | Miklos Szegedi | Riya Khandelwal | | YARN-11187 | Remove WhiteBox in yarn module. | Minor | test | Shilun Fan | Shilun Fan | | MAPREDUCE-7370 |" }, { "data": "MultipleOutputs#close call | Major | mapreduce-client | Prabhu Joseph | Ashutosh Gupta | | HADOOP-18469 | Add XMLUtils methods to centralise code that creates secure XML parsers | Major | common | PJ Fanning | PJ Fanning | | HADOOP-18442 | Remove the hadoop-openstack module | Major | build, fs, fs/swift | Steve Loughran | Steve Loughran | | HADOOP-18468 | upgrade jettison json jar due to fix CVE-2022-40149 | Major | build | PJ Fanning | PJ Fanning | | YARN-6766 | Add helper method in FairSchedulerAppsBlock to print app info | Minor | webapp | Daniel Templeton | Riya Khandelwal | | HDFS-16774 | Improve async delete replica on datanode | Major | datanode | Haiyang Hu | Haiyang Hu | | HADOOP-17779 | Lock File System Creator Semaphore Uninterruptibly | Minor | fs | David Mollitor | David Mollitor | | HADOOP-18483 | Exclude Dockerfilewindows10 from hadolint | Major | common | Gautham Banasandra | Gautham Banasandra | | HADOOP-18133 | Add Dockerfile for Windows 10 | Critical | build | Gautham Banasandra | Gautham Banasandra | | HADOOP-18360 | Update commons-csv from 1.0 to 1.9.0. 
| Minor | common | Shilun Fan | Shilun Fan | | HADOOP-18493 | update jackson-databind 2.12.7.1 due to CVE fixes | Major | common | PJ Fanning | PJ Fanning | | HADOOP-18462 | InstrumentedWriteLock should consider Reentrant case | Major | common | ZanderXu | ZanderXu | | HDFS-6874 | Add GETFILEBLOCKLOCATIONS operation to HttpFS | Major | httpfs | Gao Zhong Liang | Ashutosh Gupta | | HADOOP-17563 | Update Bouncy Castle to 1.68 or later | Major | build | Takanobu Asanuma | PJ Fanning | | HADOOP-18497 | Upgrade commons-text version to fix CVE-2022-42889 | Major | build | Xiaoqiao He | PJ Fanning | | YARN-11328 | Refactoring part of the code of SQLFederationStateStore | Major | federation, router | Shilun Fan | Shilun Fan | | HDFS-16803 | Improve some annotations in hdfs module | Major | documentation, namenode | JiangHua Zhu | JiangHua Zhu | | HDFS-16795 | Use secure XML parser utils in hdfs classes | Major | hdfs | PJ Fanning | PJ Fanning | | HADOOP-18500 | Upgrade maven-shade-plugin to 3.3.0 | Minor | build | Willi Raschkowski | Willi Raschkowski | | HADOOP-18506 | Update build instructions for Windows using VS2019 | Major | build, documentation | Gautham Banasandra | Gautham Banasandra | | YARN-11330 | Use secure XML parser utils in YARN | Major | yarn | PJ Fanning | PJ Fanning | | MAPREDUCE-7411 | Use secure XML parser utils in MapReduce | Major | mrv1, mrv2 | PJ Fanning | PJ Fanning | | YARN-11356 | Upgrade DataTables to 1.11.5 to fix CVEs | Major | yarn | Bence Kosztolnik | Bence Kosztolnik | | HDFS-16817 | Remove useless DataNode lock related configuration | Major | datanode | Haiyang Hu | Haiyang Hu | | HDFS-16802 | Print options when accessing ClientProtocol#rename2() | Minor | namenode | JiangHua Zhu | JiangHua Zhu | | YARN-11360 | Add number of decommissioning/shutdown nodes to YARN cluster metrics. | Major | client, resourcemanager | Chris Nauroth | Chris Nauroth | | HADOOP-18472 | Upgrade to snakeyaml 1.33 | Major | common | PJ Fanning | PJ Fanning | | HADOOP-18512 | upgrade woodstox-core to 5.4.0 for security fix | Major | common | phoebe chen | PJ Fanning | | YARN-11363 | Remove unused TimelineVersionWatcher and TimelineVersion from hadoop-yarn-server-tests | Major | test, yarn | Ashutosh Gupta | Ashutosh Gupta | | YARN-11364 | Docker Container to accept docker Image name with sha256 digest | Major | yarn | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16811 | Support DecommissionBackoffMonitor parameters reconfigurable | Major | datanode, namanode | Haiyang Hu | Haiyang Hu | | HADOOP-18517 | ABFS: Add fs.azure.enable.readahead option to disable readahead | Major | fs/azure | Steve Loughran | Steve Loughran | | HADOOP-18502 | Hadoop metrics should return 0 when there is no change | Major | metrics | leo sun | leo sun | | HADOOP-18433 | Fix main thread name. | Major | common, ipc | Chenyu Zheng | Chenyu Zheng | | YARN-10005 | Code improvements in MutableCSConfigurationProvider | Minor | capacity scheduler | Szilard Nemeth | Peter Szucs | | MAPREDUCE-7390 | Remove WhiteBox in mapreduce module. 
| Minor | mrv2 | Shilun Fan | Shilun Fan | | MAPREDUCE-5608 | Replace and deprecate mapred.tasktracker.indexcache.mb | Major | mapreduce-client | Sandy Ryza | Ashutosh Gupta | | HADOOP-18484 | upgrade hsqldb to" }, { "data": "due to CVE | Major | common | PJ Fanning | Ashutosh Gupta | | YARN-11369 | Commons.compress throws an IllegalArgumentException with large uids after 1.21 | Major | client | Benjamin Teke | Benjamin Teke | | HDFS-16844 | [RBF] The routers should be resiliant against exceptions from StateStore | Major | rbf | Owen OMalley | Owen OMalley | | HDFS-16813 | Remove parameter validation logic such as dfs.namenode.decommission.blocks.per.interval in DatanodeAdminManager#activate | Major | datanode | Haiyang Hu | Haiyang Hu | | HDFS-16841 | Enhance the function of DebugAdmin#VerifyECCommand | Major | erasure-coding | Haiyang Hu | Haiyang Hu | | HDFS-16840 | Enhance the usage description about oiv in HDFSCommands.md and OfflineImageViewerPB | Major | hdfs | Haiyang Hu | Haiyang Hu | | HDFS-16779 | Add ErasureCodingPolicy information to the response description for GETFILESTATUS in WebHDFS.md | Major | webhdfs | ZanderXu | ZanderXu | | YARN-11381 | Fix hadoop-yarn-common module Java Doc Errors | Major | yarn | Shilun Fan | Shilun Fan | | YARN-11380 | Fix hadoop-yarn-api module Java Doc Errors | Major | yarn | Shilun Fan | Shilun Fan | | HDFS-16846 | EC: Only EC blocks should be effected by max-streams-hard-limit configuration | Major | erasure-coding | caozhiqiang | caozhiqiang | | HDFS-16851 | RBF: Add a utility to dump the StateStore | Major | rbf | Owen OMalley | Owen OMalley | | HDFS-16839 | It should consider EC reconstruction work when we determine if a node is busy | Major | ec, erasure-coding | Ruinan Gu | Ruinan Gu | | HDFS-16860 | Upgrade moment.min.js to 2.29.4 | Major | build, ui | D M Murali Krishna Reddy | Anurag Parvatikar | | YARN-11385 | Fix hadoop-yarn-server-common module Java Doc Errors | Minor | yarn | Shilun Fan | Shilun Fan | | HADOOP-18573 | Improve error reporting on non-standard kerberos names | Blocker | security | Steve Loughran | Steve Loughran | | HADOOP-18561 | CVE-2021-37533 on commons-net is included in hadoop common and hadoop-client-runtime | Blocker | build | phoebe chen | Steve Loughran | | HDFS-16873 | FileStatus compareTo does not specify ordering | Trivial | documentation | DDillon | DDillon | | HADOOP-18538 | Upgrade kafka to 2.8.2 | Major | build | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | HDFS-16652 | Upgrade jquery datatable version references to v1.10.19 | Major | ui | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | YARN-11393 | Fs2cs could be extended to set ULF to -1 upon conversion | Major | yarn | Susheel Gupta | Susheel Gupta | | HDFS-16879 | EC : Fsck -blockId shows number of redundant internal block replicas for EC Blocks | Major | erasure-coding | Haiyang Hu | Haiyang Hu | | HDFS-16883 | Duplicate field name in hdfs-default.xml | Minor | documentation | YUBI LEE | YUBI LEE | | HDFS-16887 | Log start and end of phase/step in startup progress | Minor | namenode | Viraj Jasani | Viraj Jasani | | YARN-11409 | Fix Typo" }, { "data": "ResourceManager#webapp module | Minor | resourcemanager | Shilun Fan | Shilun Fan | | HADOOP-18595 | Fix the the and friends typos | Minor | common | Shilun Fan | Nikita Eshkeev | | HDFS-16891 | Avoid the overhead of copy-on-write exception list while loading inodes sub sections in parallel | Major | namenode | Viraj Jasani | Viraj Jasani | | HDFS-16893 | Standardize the usage of 
DFSClient debug log | Minor | dfsclient | Hualong Zhang | Hualong Zhang | | HADOOP-18604 | Add compile platform in the hadoop version output | Major | build, common | Ayush Saxena | Ayush Saxena | | HDFS-16888 | BlockManager#maxReplicationStreams, replicationStreamsHardLimit, blocksReplWorkMultiplier and PendingReconstructionBlocks#timeout should be volatile | Major | block placement | Haiyang Hu | Haiyang Hu | | HADOOP-18592 | Sasl connection failure should log remote address | Major | ipc | Viraj Jasani | Viraj Jasani | | YARN-11419 | Remove redundant exception capture in NMClientAsyncImpl and improve readability in ContainerShellWebSocket, etc | Minor | client | jingxiong zhong | jingxiong zhong | | HDFS-16848 | RBF: Improve StateStoreZookeeperImpl | Major | rbf | Sun Hao | Sun Hao | | HDFS-16903 | Fix javadoc of Class LightWeightResizableGSet | Trivial | datanode, hdfs | farmmamba | farmmamba | | HDFS-16898 | Remove write lock for processCommandFromActor of DataNode to reduce impact on heartbeat | Major | datanode | farmmamba | farmmamba | | HADOOP-18625 | Fix method name of RPC.Builder#setnumReaders | Minor | ipc | Haiyang Hu | Haiyang Hu | | MAPREDUCE-7431 | ShuffleHandler is not working correctly in SSL mode after the Netty 4 upgrade | Major | mrv2 | Tamas Domok | Tamas Domok | | HDFS-16882 | RBF: Add cache hit rate metric in MountTableResolver#getDestinationForPath | Minor | rbf | farmmamba | farmmamba | | HDFS-16907 | Add LastHeartbeatResponseTime for BP service actor | Major | datanode | Viraj Jasani | Viraj Jasani | | HADOOP-18628 | Server connection should log host name before returning VersionMismatch error | Minor | ipc | Viraj Jasani | Viraj Jasani | | YARN-11323 | [Federation] Improve Router Handler FinishApps | Major | federation, router, yarn | Shilun Fan | Shilun Fan | | YARN-11333 | Federation: Improve Yarn Router Web Page | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11425 | [Federation] Router Supports SubClusterCleaner | Major | federation | Shilun Fan | Shilun Fan | | HDFS-16914 | Add some logs for updateBlockForPipeline RPC. | Minor | namanode | farmmamba | farmmamba | | HADOOP-18215 | Enhance WritableName to be able to return aliases for classes that use serializers | Minor | common | Bryan Beaudreault | Bryan Beaudreault | | YARN-11439 | Fix Typo of hadoop-yarn-ui README.md | Minor | yarn-ui-v2 | Shilun Fan | Shilun Fan | | HDFS-16916 | Improve the use of JUnit Test in DFSClient | Minor | dfsclient | Hualong Zhang | Hualong Zhang | | HADOOP-18622 | Upgrade ant to 1.10.13 | Major | common | Aleksandr Nikolaev | Aleksandr Nikolaev | | YARN-11394 | Fix hadoop-yarn-server-resourcemanager module Java Doc Errors. | Major | resourcemanager | Shilun Fan | Shilun Fan | | HADOOP-18535 | Implement token storage solution based on MySQL | Major | common | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | YARN-11370 | Refactor MemoryFederationStateStore code. 
| Major | federation, router | Shilun Fan | Shilun Fan | | HADOOP-18645 | Provide keytab file key name with ServiceStateException | Minor | common | Viraj Jasani | Viraj Jasani | | HADOOP-18646 | Upgrade Netty to" }, { "data": "| Major | build | Aleksandr Nikolaev | Aleksandr Nikolaev | | HADOOP-18661 | Fix bin/hadoop usage script terminology | Blocker | scripts | Steve Loughran | Steve Loughran | | HDFS-16947 | RBF NamenodeHeartbeatService to report error for not being able to register namenode in state store | Major | rbf | Viraj Jasani | Viraj Jasani | | HDFS-16953 | RBF: Mount table store APIs should update cache only if state store record is successfully updated | Major | rbf | Viraj Jasani | Viraj Jasani | | HADOOP-18644 | Add bswap support for LoongArch | Major | native | zhaixiaojuan | zhaixiaojuan | | HDFS-16948 | Update log of BlockManager#chooseExcessRedundancyStriped when EC internal block is moved by balancer | Major | erasure-coding | Ruinan Gu | Ruinan Gu | | HDFS-16964 | Improve processing of excess redundancy after failover | Major | block placement | Shuyan Zhang | Shuyan Zhang | | YARN-11426 | Improve YARN NodeLabel Memory Display | Major | resourcemanager | Shilun Fan | Shilun Fan | | HADOOP-18458 | AliyunOSS: AliyunOSSBlockOutputStream to support heap/off-heap buffer before uploading data to OSS | Major | fs/oss | wujinhu | wujinhu | | HDFS-16959 | RBF: State store cache loading metrics | Major | rbf | Viraj Jasani | Viraj Jasani | | YARN-10146 | [Federation] Add missing REST APIs for Router | Major | federation | Bilwa S T | Shilun Fan | | HDFS-16967 | RBF: File based state stores should allow concurrent access to the records | Major | rbf | Viraj Jasani | Viraj Jasani | | HADOOP-18684 | S3A filesystem to support binding to other URI schemes | Major | common | Harshit Gupta | Harshit Gupta | | YARN-11436 | [Federation] MemoryFederationStateStore Support Version. | Major | federation | Shilun Fan | Shilun Fan | | HDFS-16973 | RBF: MountTableResolver cache size lookup should take read lock | Major | rbf | Viraj Jasani | Viraj Jasani | | HADOOP-18687 | Remove unnecessary dependency on json-smart | Major | auth | Michiel de Jong | Michiel de Jong | | HDFS-16952 | Support getLinkTarget API in WebHDFS | Minor | webhdfs | Hualong Zhang | Hualong Zhang | | HDFS-16971 | Add read time metrics for remote reads in Statistics | Minor | hdfs | Melissa You | Melissa You | | HDFS-16974 | Consider volumes average load of each DataNode when choosing target. | Major | datanode | Shuyan Zhang | Shuyan Zhang | | HADOOP-18590 | Publish SBOM artifacts | Major | build | Dongjoon Hyun | Dongjoon Hyun | | YARN-11465 | Improved YarnClient Log Format | Minor | client | Lu Yuan | Lu Yuan | | YARN-11438 | [Federation] ZookeeperFederationStateStore Support Version. 
| Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18597 | Simplify single node instructions for creating directories for Map Reduce | Trivial | documentation | Nikita Eshkeev | Nikita Eshkeev | | HADOOP-18691 | Add a CallerContext getter on the Schedulable interface | Major | common | Christos Bisias | Christos Bisias | | YARN-11463 | Node Labels root directory creation doesnt have a retry logic | Major | capacity scheduler | Benjamin Teke | Ashutosh Gupta | | HADOOP-18710 | Add RPC metrics for response time | Minor | metrics | liuguanghua | liuguanghua | | HADOOP-18689 | Bump jettison from 1.5.3 to 1.5.4 in /hadoop-project | Major | common | Ayush Saxena | Ayush Saxena | | HDFS-16988 | Improve NameServices info at JournalNode web UI | Minor | journal-node, ui | Zhaohui Wang | Zhaohui Wang | | HDFS-16981 | Support getFileLinkStatus API in WebHDFS | Major | webhdfs | Hualong Zhang | Hualong Zhang | | YARN-11437 | [Federation] SQLFederationStateStore Support" }, { "data": "| Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18637 | S3A to support upload of files greater than 2 GB using DiskBlocks | Major | fs/s3 | Harshit Gupta | Harshit Gupta | | YARN-11474 | The yarn queue list is displayed on the CLI | Minor | client | Lu Yuan | Lu Yuan | | HDFS-16995 | Remove unused parameters at NameNodeHttpServer#initWebHdfs | Minor | webhdfs | Zhaohui Wang | Zhaohui Wang | | HDFS-16707 | RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus | Minor | rbf | Jiale Qi | Jiale Qi | | HADOOP-18725 | Avoid cross-platform build for irrelevant Dockerfile changes | Major | build | Gautham Banasandra | Gautham Banasandra | | YARN-11462 | Fix Typo of hadoop-yarn-common | Minor | yarn | Shilun Fan | Shilun Fan | | YARN-11450 | Improvements for TestYarnConfigurationFields and TestConfigurationFieldsBase | Minor | test | Szilard Nemeth | Szilard Nemeth | | HDFS-16997 | Set the locale to avoid printing useless logs in BlockSender | Major | block placement | Shuyan Zhang | Shuyan Zhang | | YARN-10144 | Federation: Add missing FederationClientInterceptor APIs | Major | federation, router | D M Murali Krishna Reddy | Shilun Fan | | HADOOP-18134 | Setup Jenkins nightly CI for Windows 10 | Critical | build | Gautham Banasandra | Gautham Banasandra | | YARN-11470 | FederationStateStoreFacade Cache Support Guava Cache | Major | federation | Shilun Fan | Shilun Fan | | YARN-11477 | [Federation] MemoryFederationStateStore Support Store ApplicationSubmitData | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18359 | Update commons-cli from 1.2 to 1.5. 
| Major | common | Shilun Fan | Shilun Fan | | YARN-11479 | [Federation] ZookeeperFederationStateStore Support Store ApplicationSubmitData | Major | federation | Shilun Fan | Shilun Fan | | HDFS-16990 | HttpFS Add Support getFileLinkStatus API | Major | httpfs | Hualong Zhang | Hualong Zhang | | YARN-11351 | [Federation] Router Support Calls SubClusters RMAdminRequest | Major | federation, router | Shilun Fan | Shilun Fan | | HDFS-16978 | RBF: Admin command to support bulk add of mount points | Minor | rbf | Viraj Jasani | Viraj Jasani | | HDFS-17001 | Support getStatus API in WebHDFS | Major | webhdfs | Hualong Zhang | Hualong Zhang | | YARN-11495 | Fix typos in hadoop-yarn-server-web-proxy | Minor | webapp | Shilun Fan | Shilun Fan | | HDFS-17015 | Typos in HDFS Documents | Minor | configuration | Liang Yan | Liang Yan | | HDFS-17009 | RBF: state store putAll should also return failed records | Major | rbf | Viraj Jasani | Viraj Jasani | | HDFS-17012 | Remove unused DFSConfigKeys#DFSDATANODEPMEMCACHEDIRS_DEFAULT | Minor | datanode, hdfs | JiangHua Zhu | JiangHua Zhu | | HDFS-16979 | RBF: Add dfsrouter port in hdfsauditlog | Major | rbf | liuguanghua | liuguanghua | | HDFS-16653 | Improve error messages in ShortCircuitCache | Minor | dfsadmin | ECFuzz | ECFuzz | | HDFS-17014 | HttpFS Add Support getStatus API | Major | webhdfs | Hualong Zhang | Hualong Zhang | | YARN-11496 | Improve TimelineService log format | Minor | timelineservice | Xianming Lei | Xianming Lei | | HDFS-16909 | Improve ReplicaMap#mergeAll method. | Minor | datanode | farmmamba | farmmamba | | HDFS-17020 | RBF: mount table addAll should print failed records in std error | Major | rbf | Viraj Jasani | Viraj Jasani | | HDFS-16908 | Fix javadoc of field IncrementalBlockReportManager#readyToSend. | Major | datanode | farmmamba | farmmamba | | YARN-11276 | Add lru cache for" }, { "data": "| Minor | resourcemanager | Xianming Lei | Xianming Lei | | HDFS-17026 | RBF: NamenodeHeartbeatService should update JMX report with configurable frequency | Major | rbf | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | HDFS-17031 | RBF: Reduce repeated code in RouterRpcServer | Minor | rbf | Chengwei Wang | Chengwei Wang | | HADOOP-18709 | Add curator based ZooKeeper communication support over SSL/TLS into the common library | Major | common | Ferenc Erdelyi | Ferenc Erdelyi | | YARN-11277 | trigger deletion of log-dir by size for NonAggregatingLogHandler | Minor | nodemanager | Xianming Lei | Xianming Lei | | HDFS-17028 | RBF: Optimize debug logs of class ConnectionPool and other related class. | Minor | rbf | farmmamba | farmmamba | | YARN-11497 | Support removal of only selective node states in untracked removal flow | Major | resourcemanager | Mudit Sharma | Mudit Sharma | | HDFS-17029 | Support getECPolices API in WebHDFS | Major | webhdfs | Hualong Zhang | Hualong Zhang | | HDFS-17035 | FsVolumeImpl#getActualNonDfsUsed may return negative value | Minor | datanode | farmmamba | farmmamba | | HADOOP-11219 | [Umbrella] Upgrade to netty 4 | Major | build, common | Haohui Mai | Haohui Mai | | HDFS-17037 | Consider nonDfsUsed when running balancer | Major | balancer & mover | Shuyan Zhang | Shuyan Zhang | | YARN-11504 | [Federation] YARN Federation Supports Non-HA mode. 
| Major | federation, nodemanager | Shilun Fan | Shilun Fan | | YARN-11429 | Improve updateTestDataAutomatically in TestRMWebServicesCapacitySched | Major | yarn | Tamas Domok | Tamas Domok | | HDFS-17030 | Limit wait time for getHAServiceState in ObserverReaderProxy | Minor | hdfs | Xing Lin | Xing Lin | | HDFS-17042 | Add rpcCallSuccesses and OverallRpcProcessingTime to RpcMetrics for Namenode | Major | hdfs | Xing Lin | Xing Lin | | HDFS-17043 | HttpFS implementation for getAllErasureCodingPolicies | Major | httpfs | Hualong Zhang | Hualong Zhang | | HADOOP-18774 | Add .vscode to gitignore | Major | common | Xiaoqiao He | Xiaoqiao He | | YARN-11506 | The formatted yarn queue list is displayed on the command line | Minor | yarn | Lu Yuan | Lu Yuan | | HDFS-17053 | Optimize method BlockInfoStriped#findSlot to reduce time complexity. | Trivial | hdfs | farmmamba | farmmamba | | YARN-11511 | Improve TestRMWebServices test config and data | Major | capacityscheduler | Tamas Domok | Bence Kosztolnik | | HADOOP-18713 | Update solr from 8.8.2 to 8.11.2 | Minor | common | Xuesen Liang | Xuesen Liang | | HDFS-17057 | RBF: Add DataNode maintenance states to Federation UI | Major | rbf | Haiyang Hu | Haiyang Hu | | HDFS-17055 | Export HAState as a metric from Namenode for monitoring | Minor | hdfs | Xing Lin | Xing Lin | | HDFS-17044 | Set size of non-exist block to NO_ACK when process FBR or IBR to avoid useless report from DataNode | Major | namenode | Haiyang Hu | Haiyang Hu | | HADOOP-18789 | Remove ozone from hadoop dev support | Trivial | common | Xiaoqiao He | Xiaoqiao He | | HDFS-17065 | Fix typos in hadoop-hdfs-project | Minor | hdfs | Zhaohui Wang | Zhaohui Wang | | HADOOP-18779 | Improve hadoop-function.sh#status script | Major | common | Shilun Fan | Shilun Fan | | HDFS-17073 | Enhance the warning message output for BlockGroupNonStripedChecksumComputer#compute | Major | hdfs | Haiyang Hu | Haiyang Hu | | HDFS-17070 | Remove unused import in" }, { "data": "| Trivial | datanode | farmmamba | farmmamba | | HDFS-17064 | Document the usage of the new Balancer sortTopNodes and hotBlockTimeInterval parameter | Major | balancer, documentation | Haiyang Hu | Haiyang Hu | | HDFS-17033 | Update fsck to display stale state info of blocks accurately | Minor | datanode, namanode | WangYuanben | WangYuanben | | HDFS-17076 | Remove the unused method isSlownodeByNameserviceId in DataNode | Major | datanode | Haiyang Hu | Haiyang Hu | | HADOOP-18794 | ipc.server.handler.queue.size missing from core-default.xml | Major | rpc-server | WangYuanben | WangYuanben | | HDFS-17082 | Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md | Major | documentation | Haiyang Hu | Haiyang Hu | | HDFS-17083 | Support getErasureCodeCodecs API in WebHDFS | Major | webhdfs | Hualong Zhang | Hualong Zhang | | HDFS-17068 | Datanode should record last directory scan time. | Minor | datanode | farmmamba | farmmamba | | HDFS-17086 | Fix the parameter settings in TestDiskspaceQuotaUpdate#updateCountForQuota. 
| Major | test | Haiyang Hu | Haiyang Hu | | HADOOP-18801 | Delete path directly when it can not be parsed in trash | Major | common | farmmamba | farmmamba | | HDFS-17075 | Reconfig disk balancer parameters for datanode | Major | datanode | Haiyang Hu | Haiyang Hu | | HDFS-17091 | Blocks on DECOMMISSIONING DNs should be sorted properly in LocatedBlocks | Major | hdfs | WangYuanben | WangYuanben | | HDFS-17088 | Improve the debug verifyEC and dfsrouteradmin commands in HDFSCommands.md | Major | dfsadmin | Haiyang Hu | Haiyang Hu | | YARN-11540 | Fix typo: form -> from | Trivial | nodemanager | Seokchan Yoon | Seokchan Yoon | | HDFS-17074 | Remove incorrect comment in TestRedudantBlocks#setup | Trivial | test | farmmamba | farmmamba | | HDFS-17112 | Show decommission duration in JMX and HTML | Major | namenode | Shuyan Zhang | Shuyan Zhang | | HDFS-17119 | RBF: Logger fix for StateStoreMySQLImpl | Trivial | rbf | Zhaohui Wang | Zhaohui Wang | | HDFS-17115 | HttpFS Add Support getErasureCodeCodecs API | Major | httpfs | Hualong Zhang | Hualong Zhang | | HDFS-17117 | Print reconstructionQueuesInitProgress periodically when BlockManager processMisReplicatesAsync. | Major | namenode | Haiyang Hu | Haiyang Hu | | HDFS-17116 | RBF: Update invoke millisecond time as monotonicNow() in RouterSafemodeService. | Major | rbf | Haiyang Hu | Haiyang Hu | | HDFS-17135 | Update fsck -blockId to display excess state info of blocks | Major | namnode | Haiyang Hu | Haiyang Hu | | HDFS-17136 | Fix annotation description and typo in BlockPlacementPolicyDefault Class | Minor | block placement | Zhaobo Huang | Zhaobo Huang | | YARN-11416 | FS2CS should use CapacitySchedulerConfiguration in FSQueueConverterBuilder | Major | capacity scheduler | Benjamin Teke | Susheel Gupta | | HDFS-17118 | Fix minor checkstyle warnings in TestObserverReadProxyProvider | Trivial | hdfs | Xing Lin | Xing Lin | | HADOOP-18836 | Some properties are missing from hadoop-policy.xml. | Major | common, documentation, security | WangYuanben | WangYuanben | | HADOOP-18810 | Document missing a lot of properties in core-default.xml | Major | common, documentation | WangYuanben | WangYuanben | | HDFS-17144 | Remove incorrect comment in method storeAllocatedBlock | Trivial | namenode | farmmamba | farmmamba | | HDFS-17137 | Standby/Observer NameNode skip to handle redundant replica block logic when set decrease replication. | Major | namenode | Haiyang Hu | Haiyang Hu | | HADOOP-18840 | Add enQueue time to RpcMetrics | Minor | rpc-server | Liangjun He | Liangjun He | | HDFS-17145 | Fix description of property" }, { "data": "| Trivial | documentation | farmmamba | farmmamba | | HDFS-17148 | RBF: SQLDelegationTokenSecretManager must cleanup expired tokens in SQL | Major | rbf | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | HDFS-17087 | Add Throttler for datanode reading block | Major | datanode | Haiyang Hu | Haiyang Hu | | HDFS-17162 | RBF: Add missing comments in StateStoreService | Minor | rbf | TIsNotT | TIsNotT | | HADOOP-18328 | S3A supports S3 on Outposts | Major | fs/s3 | Sotetsu Suzugamine | Sotetsu Suzugamine | | HDFS-17168 | Support getTrashRoots API in WebHDFS | Major | webhdfs | Hualong Zhang | Hualong Zhang | | HADOOP-18880 | Add some rpc related metrics to Metrics.md | Major | documentation | Haiyang Hu | Haiyang Hu | | HDFS-17140 | Revisit the BPOfferService.reportBadBlocks() method. 
| Minor | datanode | Liangjun He | Liangjun He | | YARN-11564 | Fix wrong config in yarn-default.xml | Major | router | Chenyu Zheng | Chenyu Zheng | | HDFS-17177 | ErasureCodingWork reconstruct ignore the block length is Long.MAX_VALUE. | Major | erasure-coding | Haiyang Hu | Haiyang Hu | | HDFS-17139 | RBF: For the doc of the class RouterAdminProtocolTranslatorPB, it describes the function of the class ClientNamenodeProtocolTranslatorPB | Minor | rbf | Jian Zhang | Jian Zhang | | HDFS-17178 | BootstrapStandby needs to handle RollingUpgrade | Minor | namenode | Danny Becker | Danny Becker | | MAPREDUCE-7453 | Revert HADOOP-18649 | Major | mrv2 | Chenyu Zheng | Chenyu Zheng | | HDFS-17180 | HttpFS Add Support getTrashRoots API | Major | webhdfs | Hualong Zhang | Hualong Zhang | | HDFS-17192 | Add bock info when constructing remote block reader meets IOException | Trivial | hdfs-client | farmmamba | farmmamba | | HADOOP-18797 | Support Concurrent Writes With S3A Magic Committer | Major | fs/s3 | Emanuel Velzi | Syed Shameerur Rahman | | HDFS-17184 | Improve BlockReceiver to throws DiskOutOfSpaceException when initialize | Major | datanode | Haiyang Hu | Haiyang Hu | | YARN-11567 | Aggregate container launch debug artifacts automatically in case of error | Minor | yarn | Bence Kosztolnik | Bence Kosztolnik | | HDFS-17197 | Show file replication when listing corrupt files. | Major | fs, namanode | Shuyan Zhang | Shuyan Zhang | | HDFS-17204 | EC: Reduce unnecessary log when processing excess redundancy. | Major | erasure-coding | Shuyan Zhang | Shuyan Zhang | | YARN-11468 | Zookeeper SSL/TLS support | Critical | resourcemanager | Ferenc Erdelyi | Ferenc Erdelyi | | HDFS-17211 | Fix comments in the RemoteParam class | Minor | rbf | xiaojunxiang | xiaojunxiang | | HDFS-17194 | Enhance the log message for striped block recovery | Major | logging | Haiyang Hu | Haiyang Hu | | HADOOP-18917 | upgrade to commons-io 2.14.0 | Major | build | PJ Fanning | PJ Fanning | | HDFS-17205 | HdfsServerConstants.MINBLOCKSFOR_WRITE should be configurable | Major | hdfs | Haiyang Hu | Haiyang Hu | | HDFS-17200 | Add some datanode related metrics to Metrics.md | Minor | datanode, metrics | Zhaobo Huang | Zhaobo Huang | | HDFS-17171 | CONGESTION_RATIO should be configurable | Minor | datanode | farmmamba | farmmamba | | HDFS-16740 | Mini cluster test flakiness | Major | hdfs, test | Steve Vaughan | Steve Vaughan | | HDFS-17208 | Add the metrics PendingAsyncDiskOperations in datanode | Major | datanode | Haiyang Hu | Haiyang Hu | | HDFS-17217 | Add lifeline RPC start up log" }, { "data": "NameNode#startCommonServices | Major | namenode | Haiyang Hu | Haiyang Hu | | HADOOP-18890 | remove okhttp usage | Major | build, common | PJ Fanning | PJ Fanning | | HADOOP-18926 | Add documentation related to NodeFencer | Minor | documentation, ha | JiangHua Zhu | JiangHua Zhu | | YARN-11583 | Improve Node Link for YARN Federation Web Page | Major | federation | Shilun Fan | Shilun Fan | | YARN-11469 | Refactor FederationStateStoreFacade Cache Code | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18916 | module-info classes from external dependencies appearing in uber jars | Major | build | PJ Fanning | PJ Fanning | | HDFS-17210 | Optimize AvailableSpaceBlockPlacementPolicy | Minor | hdfs | Fei Guo | guophilipse | | HDFS-17228 | Improve documentation related to BlockManager | Minor | block placement, documentation | JiangHua Zhu | JiangHua Zhu | | HADOOP-18867 | Upgrade ZooKeeper to 3.6.4 | Minor | build | Masatake Iwasaki 
| Masatake Iwasaki | | HADOOP-18942 | Upgrade ZooKeeper to 3.7.2 | Major | common | Masatake Iwasaki | Masatake Iwasaki | | HDFS-17235 | Fix javadoc errors in BlockManager | Major | documentation | Haiyang Hu | Haiyang Hu | | HADOOP-18919 | Zookeeper SSL/TLS support in HDFS ZKFC | Major | common | Zita Dombi | Zita Dombi | | HADOOP-18868 | Optimize the configuration and use of callqueue overflow trigger failover | Major | common | Haiyang Hu | Haiyang Hu | | HADOOP-18949 | upgrade maven dependency plugin due to security issue | Major | build | PJ Fanning | PJ Fanning | | HADOOP-18920 | RPC Metrics : Optimize logic for log slow RPCs | Major | metrics | Haiyang Hu | Haiyang Hu | | HADOOP-18933 | upgrade netty to 4.1.100 due to CVE | Major | build | PJ Fanning | PJ Fanning | | HDFS-15273 | CacheReplicationMonitor hold lock for long time and lead to NN out of service | Major | caching, namenode | Xiaoqiao He | Xiaoqiao He | | YARN-11592 | Add timeout to GPGUtils#invokeRMWebService. | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18936 | Upgrade to jetty 9.4.53 | Major | build | PJ Fanning | PJ Fanning | | MAPREDUCE-7457 | Limit number of spill files getting created | Critical | mrv2 | Mudit Sharma | Mudit Sharma | | HADOOP-18963 | Fix typos in .gitignore | Minor | common | | | | HDFS-17243 | Add the parameter storage type for getBlocks method | Major | balancer | Haiyang Hu | Haiyang Hu | | HDFS-16791 | Add getEnclosingRoot() API to filesystem interface and implementations | Major | fs | Tom McCormick | Tom McCormick | | HADOOP-18954 | Filter NaN values from JMX json interface | Major | common | Bence Kosztolnik | Bence Kosztolnik | | HADOOP-18964 | Update plugin for SBOM generation to 2.7.10 | Major | common | Vinod Anandan | Vinod Anandan | | HDFS-17172 | Support FSNamesystemLock Parameters reconfigurable | Major | namanode | Haiyang Hu | Haiyang Hu | | HADOOP-18956 | Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and ZKSignerSecretProvider | Major | common | Zita Dombi | Istvn Fajth | | HADOOP-18957 | Use StandardCharsets.UTF_8 constant | Major | common | PJ Fanning | PJ Fanning | | YARN-11611 | Remove json-io to 4.14.1 due to CVE-2023-34610 | Major | yarn | Benjamin Teke | Benjamin Teke | | HDFS-17263 | RBF: Fix client ls trash path cannot get except default nameservices trash path | Major | rbf | liuguanghua | liuguanghua | | HADOOP-18924 | Upgrade grpc jars to v1.53.0 due to CVEs | Major | build | PJ Fanning | PJ Fanning | | HDFS-17259 | Fix typo in TestFsDatasetImpl Class. 
| Trivial | test | Zhaobo Huang | Zhaobo Huang | | HDFS-17218 | NameNode should process time out excess redundancy blocks | Major | namanode | Haiyang Hu | Haiyang Hu | | HDFS-17250" }, { "data": "EditLogTailer#triggerActiveLogRoll should handle thread Interrupted | Major | hdfs | Haiyang Hu | Haiyang Hu | | YARN-11420 | Stabilize TestNMClient | Major | yarn | Bence Kosztolnik | Susheel Gupta | | HADOOP-18982 | Fix doc about loading native libraries | Major | documentation | Shuyan Zhang | Shuyan Zhang | | YARN-11423 | [Federation] Router Supports CLI Commands | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18925 | S3A: add option fs.s3a.optimized.copy.from.local.enabled to enable/disable CopyFromLocalOperation | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18989 | Use thread pool to improve the speed of creating control files in TestDFSIO | Major | benchmarks, common | farmmamba | farmmamba | | HDFS-17279 | RBF: Fix link to Fedbalance document | Major | rbf | Haiyang Hu | Haiyang Hu | | HDFS-17272 | NNThroughputBenchmark should support specifying the base directory for multi-client test | Major | namenode | caozhiqiang | caozhiqiang | | HDFS-17152 | Fix the documentation of count command in FileSystemShell.md | Trivial | documentation | farmmamba | farmmamba | | HDFS-17242 | Make congestion backoff time configurable | Minor | hdfs-client | farmmamba | farmmamba | | HDFS-17282 | Reconfig SlowIoWarningThreshold parameters for datanode. | Minor | datanode | Zhaobo Huang | Zhaobo Huang | | YARN-11630 | Passing admin Java options to container localizers | Major | yarn | Peter Szucs | Peter Szucs | | YARN-11563 | Fix typo in AbstractContainerAllocator from CSAssignemnt to CSAssignment | Trivial | capacityscheduler | wangzhongwei | wangzhongwei | | HADOOP-18613 | Upgrade ZooKeeper to version 3.8.3 | Major | common | Tamas Penzes | Bilwa S T | | YARN-11634 | Speed-up TestTimelineClient | Minor | yarn | Bence Kosztolnik | Bence Kosztolnik | | HDFS-17285 | RBF: Add a safe mode check period configuration | Minor | rbf | liuguanghua | liuguanghua | | HDFS-17215 | RBF: Fix some method annotations about @throws | Minor | rbf | xiaojunxiang | xiaojunxiang | | HDFS-17275 | Judge whether the block has been deleted in the block report | Minor | hdfs | lei w | lei w | | HDFS-17297 | The NameNode should remove block from the BlocksMap if the block is marked as deleted. | Major | namanode | Haiyang Hu | Haiyang Hu | | HDFS-17277 | Delete invalid code logic in namenode format | Minor | namenode | zhangzhanchang | zhangzhanchang | | HADOOP-18540 | Upgrade Bouncy Castle to 1.70 | Major | build | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | HDFS-17310 | DiskBalancer: Enhance the log message for submitPlan | Major | datanode | Haiyang Hu | Haiyang Hu | | HDFS-17023 | RBF: Record proxy time when call invokeConcurrent method. | Minor | rbf | farmmamba | farmmamba | | YARN-11529 | Add metrics for ContainerMonitorImpl. | Minor | nodemanager | Xianming Lei | Xianming Lei | | HDFS-17306 | RBF:Router should not return nameservices that does not enable observer nodes in RpcResponseHeaderProto | Major | rdf, router | liuguanghua | liuguanghua | | HDFS-17322 | RetryCache#MAXCAPACITY seems to be MINCAPACITY | Trivial | ipc | farmmamba | farmmamba | | HDFS-17325 | Doc: Fix the documentation of fs expunge command in FileSystemShell.md | Minor | documentation, fs | liuguanghua | liuguanghua | | HDFS-17315 | Optimize the namenode format code logic. 
| Major | namenode | Zhaobo Huang | Zhaobo Huang | | YARN-11642 | Fix Flaky" }, { "data": "TestTimelineAuthFilterForV2#testPutTimelineEntities | Major | timelineservice | Shilun Fan | Shilun Fan | | HDFS-17317 | DebugAdmin metaOut not need multiple close | Major | hdfs | xy | xy | | HDFS-17312 | packetsReceived metric should ignore heartbeat packet | Major | datanode | farmmamba | farmmamba | | HDFS-17128 | RBF: SQLDelegationTokenSecretManager should use version of tokens updated by other routers | Major | rbf | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | HADOOP-19034 | Fix Download Maven Url Not Found | Major | common | Shilun Fan | Shilun Fan | | MAPREDUCE-7468 | Change add-opens flags default value from true to false | Major | mrv2 | Benjamin Teke | Benjamin Teke | | HADOOP-18895 | upgrade to commons-compress 1.24.0 due to CVE | Major | build | PJ Fanning | PJ Fanning | | HADOOP-19040 | mvn site commands fails due to MetricsSystem And MetricsSystemImpl changes. | Major | build | Shilun Fan | Shilun Fan | | HADOOP-19031 | Enhance access control for RunJar | Major | security | Xiaoqiao He | Xiaoqiao He | | HADOOP-19038 | Improve create-release RUN script | Major | build | Shilun Fan | Shilun Fan | | HDFS-17343 | Revert HDFS-16016. BPServiceActor to provide new thread to handle IBR | Major | namenode | Shilun Fan | Shilun Fan | | HADOOP-19039 | Hadoop 3.4.0 Highlight big features and improvements. | Major | common | Shilun Fan | Shilun Fan | | YARN-10888 | [Umbrella] New capacity modes for CS | Major | capacity scheduler | Szilard Nemeth | Benjamin Teke | | HADOOP-19051 | Hadoop 3.4.0 Big feature/improvement highlight addendum | Major | common | Benjamin Teke | Benjamin Teke | | YARN-10889 | [Umbrella] Queue Creation in Capacity Scheduler - Tech debts | Major | capacity scheduler | Szilard Nemeth | Benjamin Teke | | HDFS-17359 | EC: recheck failed streamers should only after flushing all packets. 
| Minor | ec | farmmamba | farmmamba | | HADOOP-18987 | Corrections to Hadoop FileSystem API Definition | Minor | documentation | Dieter De Paepe | Dieter De Paepe | | HADOOP-18993 | S3A: Add option fs.s3a.classloader.isolation (#6301) | Minor | fs/s3 | Antonio Murgia | Antonio Murgia | | HADOOP-19059 | S3A: update AWS SDK to 2.23.19 to support S3 Access Grants | Minor | build, fs/s3 | Jason Han | Jason Han | | HADOOP-18930 | S3A: make fs.s3a.create.performance an option you can set for the entire bucket | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-19065 | Update Protocol Buffers installation to 3.21.12 | Major | build | Zhaobo Huang | Zhaobo Huang | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:|:|:--|:--|:-|:-| | HDFS-15196 | RBF: RouterRpcServer getListing cannot list large dirs correctly | Critical | rbf | Fengnan Li | Fengnan Li | | HDFS-15252 | HttpFS: setWorkingDirectory should not accept invalid paths | Major | httpfs | Hemanth Boyina | Hemanth Boyina | | HDFS-15256 | Fix typo in DataXceiverServer#run() | Trivial | datanode | Lisheng Sun | Lisheng Sun | | HDFS-15249 | ThrottledAsyncChecker is not" }, { "data": "| Major | federation | Toshihiro Suzuki | Toshihiro Suzuki | | HDFS-15263 | Fix the logic of scope and excluded scope in Network Topology | Major | net | Ayush Saxena | Ayush Saxena | | YARN-10207 | CLOSE_WAIT socket connection leaks during rendering of (corrupted) aggregated logs on the JobHistoryServer Web UI | Major | yarn | Siddharth Ahuja | Siddharth Ahuja | | YARN-10226 | NPE in Capacity Scheduler while using %primary_group queue mapping | Critical | capacity scheduler | Peter Bacsko | Peter Bacsko | | HDFS-15269 | NameNode should check the authorization API version only once during initialization | Blocker | namenode | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-16962 | Making `getBoolean` log warning message for unrecognized value | Major | conf | Ctest | Ctest | | HADOOP-16967 | TestSequenceFile#testRecursiveSeqFileCreate fails in subsequent run | Minor | common, test | Ctest | Ctest | | MAPREDUCE-7272 | TaskAttemptListenerImpl excessive log messages | Major | test | Ahmed Hussein | Ahmed Hussein | | HADOOP-16958 | NPE when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL | Critical | common, ha | Ctest | Ctest | | YARN-10219 | YARN service placement constraints is broken | Blocker | yarn | Eric Yang | Eric Yang | | YARN-10233 | [YARN UI2] No Logs were found in YARN Daemon Logs page | Blocker | yarn-ui-v2 | Akhil PB | Akhil PB | | MAPREDUCE-7273 | JHS: make sure that Kerberos relogin is performed when KDC becomes offline then online again | Major | jobhistoryserver | Peter Bacsko | Peter Bacsko | | HDFS-15266 | Add missing DFSOps Statistics in WebHDFS | Major | webhdfs | Ayush Saxena | Ayush Saxena | | HDFS-15218 | RBF: MountTableRefresherService failed to refresh other router MountTableEntries in secure mode. 
| Major | rbf | Surendra Singh Lilhore | Surendra Singh Lilhore | | HADOOP-16971 | TestFileContextResolveAfs#testFileContextResolveAfs creates dangling link and fails for subsequent runs | Minor | common, fs, test | Ctest | Ctest | | HDFS-15275 | HttpFS: Response of Create was not correct with noredirect and data are true | Major | httpfs | Hemanth Boyina | Hemanth Boyina | | HDFS-15276 | Concat on INodeRefernce fails with illegal state exception | Critical | namanode | Hemanth Boyina | Hemanth Boyina | | HDFS-15281 | ZKFC ignores dfs.namenode.rpc-bind-host and uses dfs.namenode.rpc-address to bind to host address | Major | ha, namenode | Dhiraj Hegde | Dhiraj Hegde | | HDFS-15297 | TestNNHandlesBlockReportPerStorage::blockReport_02 fails intermittently in trunk | Major | datanode, test | Mingliang Liu | Ayush Saxena | | HDFS-15298 | Fix the findbugs warnings introduced in HDFS-15217 | Major | namanode | Toshihiro Suzuki | Toshihiro Suzuki | | HDFS-15301 | statfs function in hdfs-fuse is not working | Major | fuse-dfs, libhdfs | Aryan Gupta | Aryan Gupta | | HDFS-15210 | EC : File write hanged when DN is shutdown by admin command. | Major | ec | Surendra Singh Lilhore | Surendra Singh Lilhore | | HDFS-15285 | The same distance and load nodes dont shuffle when consider DataNode load | Major | datanode | Lisheng Sun | Lisheng Sun | | HDFS-15265 | HttpFS: validate content-type in HttpFSUtils | Major | httpfs | Hemanth Boyina | Hemanth Boyina | | HDFS-15309 | Remove redundant String.valueOf method on ExtendedBlockId.java | Trivial | hdfs-client | bianqi | bianqi | | HADOOP-16957 | NodeBase.normalize doesnt removing all trailing slashes. | Major | net | Ayush Saxena | Ayush Saxena | | HADOOP-17011 | Tolerate leading and trailing spaces in fs.defaultFS | Major | common | Ctest | Ctest | | HDFS-15320 | StringIndexOutOfBoundsException in HostRestrictingAuthorizationFilter | Major | webhdfs | Akira Ajisaka | Akira Ajisaka | | HDFS-15325 | TestRefreshCallQueue is failing due to changed CallQueue constructor | Major | test | Konstantin Shvachko | Fengnan Li | | YARN-10256 | Refactor" }, { "data": "| Major | test | Ahmed Hussein | Ahmed Hussein | | HDFS-15270 | Account for *env == NULL in hdfsThreadDestructor | Major | libhdfs | Babneet Singh | Babneet Singh | | HDFS-15331 | Remove invalid exclusions that minicluster dependency on HDFS | Major | build | Wanqiang Ji | Wanqiang Ji | | YARN-8959 | TestContainerResizing fails randomly | Minor | test | Bibin Chundatt | Ahmed Hussein | | HDFS-15332 | Quota Space consumed was wrong in truncate with Snapshots | Major | qouta, snapshots | Hemanth Boyina | Hemanth Boyina | | YARN-9017 | PlacementRule order is not maintained in CS | Major | capacity scheduler | Bibin Chundatt | Bilwa S T | | HDFS-15323 | StandbyNode fails transition to active due to insufficient transaction tailing | Major | namenode, qjm | Konstantin Shvachko | Konstantin Shvachko | | HADOOP-17025 | Fix invalid metastore configuration in S3GuardTool tests | Minor | fs/s3, test | Masatake Iwasaki | Masatake Iwasaki | | HDFS-15339 | TestHDFSCLI fails for user names with the dot/dash character | Major | test | Yan Xiaole | Yan Xiaole | | HDFS-15250 | Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException | Major | hdfs-client | Ctest | Ctest | | HADOOP-16768 | SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data | Major | io, test | zhao bo | Akira Ajisaka | | HDFS-15243 | Add an 
option to prevent sub-directories of protected directories from deletion | Major | namenode | liuyanyu | liuyanyu | | HDFS-14367 | EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting number of threads | Major | ec | Guo Lei | Guo Lei | | YARN-9301 | Too many InvalidStateTransitionException with SLS | Major | scheduler-load-simulator | Bibin Chundatt | Bilwa S T | | HADOOP-17035 | Trivial typo(s) which are timout, interruped in comment, LOG and documents | Trivial | documentation | Sungpeo Kook | Sungpeo Kook | | HDFS-15300 | RBF: updateActiveNamenode() is invalid when RPC address is IP | Major | rbf | ZanderXu | ZanderXu | | HADOOP-15524 | BytesWritable causes OOME when array size reaches Integer.MAX_VALUE | Major | io | Joseph Smith | Joseph Smith | | YARN-10154 | CS Dynamic Queues cannot be configured with absolute resources | Major | capacity scheduler | Sunil G | Manikandan R | | HDFS-15316 | Deletion failure should not remove directory from snapshottables | Major | namanode | Hemanth Boyina | Hemanth Boyina | | YARN-9898 | Dependency netty-all-4.1.27.Final doesn't support ARM platform | Major | buid | liusheng | liusheng | | YARN-10265 | Upgrade Netty-all dependency to latest version 4.1.50 to fix ARM support issue | Major | buid | liusheng | liusheng | | YARN-9444 | YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize yarn.io/gpu as a valid resource | Minor | api | Gergely Pollák | Gergely Pollák | | HDFS-15293 | Relax the condition for accepting a fsimage when receiving a checkpoint | Critical | namenode | Chen Liang | Chen Liang | | HADOOP-17024 | ListStatus on ViewFS root (ls /) should list the linkFallBack root (configured target | Major | fs, viewfs | Uma Maheswara Rao G | Abhishek Das | | MAPREDUCE-6826 | Job fails with InvalidStateTransitonException: Invalid event: JOBTASKCOMPLETED at SUCCEEDED/COMMITTING | Major | mrv2 | Varun Saxena | Bilwa S T | | HADOOP-16586 | ITestS3GuardFsck, others fails when run using a local metastore | Major | fs/s3 | Siddharth Seth | Masatake Iwasaki | | HADOOP-16900 | Very large files can be truncated when written through S3AFileSystem | Major | fs/s3 | Andrew Olson | Mukund Thakur | | YARN-10228 | Yarn Service fails if am java opts contains ZK authentication file path | Major | yarn | Bilwa S T | Bilwa S T | | HADOOP-17049 | javax.activation-api and jakarta.activation-api define overlapping classes | Major | build | Akira Ajisaka | Akira Ajisaka | | HADOOP-17040 | Fix intermittent failure of ITestBlockingThreadPoolExecutorService | Minor | fs/s3, test | Masatake Iwasaki | Masatake Iwasaki | | HDFS-15363 | BlockPlacementPolicyWithNodeGroup should validate if it is initialized by NetworkTopologyWithNodeGroup | Major | block placement | Hemanth Boyina | Hemanth Boyina | | HDFS-15093 | RENAME.TO_TRASH is ignored When RENAME.OVERWRITE is specified | Major | hdfs-client | Harshakiran Reddy | Ayush Saxena | | HDFS-12288 | Fix DataNode's xceiver count calculation | Major | datanode, hdfs | Lukas Majercak | Lisheng Sun | | HDFS-15373 | Fix number of threads in IPCLoggerChannel#createParallelExecutor | Major | journal-node | Ayush Saxena | Ayush Saxena | | HDFS-15362 | FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all distinct blocks | Major | snapshots | Hemanth Boyina | Hemanth Boyina | | MAPREDUCE-7278 | Speculative execution behavior is observed even when mapreduce.map.speculative and mapreduce.reduce.speculative are false | Major | task | Tarun Parimi | Tarun Parimi | |
HADOOP-7002 | Wrong description of copyFromLocal and copyToLocal in documentation | Minor | documentation | Jingguo Yao | Andras Bokor | | HADOOP-17052 | NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort | Major | net | Dhiraj Hegde | Dhiraj Hegde | | YARN-10254 | CapacityScheduler incorrect User Group Mapping after leaf queue change | Major | capacity scheduler | Gergely Pollák | Gergely Pollák | | HADOOP-17062 | Fix shelldocs path in Jenkinsfile | Major | build | Akira Ajisaka | Akira Ajisaka | | HADOOP-17056 | shelldoc fails in hadoop-common | Major | build | Akira Ajisaka | Akira Ajisaka | | YARN-10286 | PendingContainers bugs in the scheduler outputs | Critical | resourcemanager | Adam Antal | Andras Gyori | | HDFS-15396 | Fix TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir | Major | test | Ayush Saxena | Ayush Saxena | | HDFS-15386 | ReplicaNotFoundException keeps happening in DN after removing multiple DN's data directories | Major | datanode | Toshihiro Suzuki | Toshihiro Suzuki | | HDFS-15398 | EC: hdfs client hangs due to exception during addBlock | Critical | ec, hdfs-client | Hongbing Wang | Hongbing Wang | | YARN-10300 | appMasterHost not set in RM ApplicationSummary when AM fails before first heartbeat | Major | am | Eric Badger | Eric Badger | | HADOOP-17059 | ArrayIndexOfboundsException in ViewFileSystem#listStatus | Major | viewfs | Hemanth Boyina | Hemanth Boyina | | YARN-10296 | Make ContainerPBImpl#getId/setId synchronized | Minor | yarn-common | Benjamin Teke | Benjamin Teke | | HADOOP-17060 | listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory | Major | viewfs | Srinivasu Majeti | Uma Maheswara Rao G | | YARN-10312 | Add support for yarn logs -logFile to retain backward compatibility | Major | client | Jim Brennan | Jim Brennan | | HDFS-15351 | Blocks scheduled count was wrong on truncate | Major | block placement | Hemanth Boyina | Hemanth Boyina | | HDFS-15403 | NPE in FileIoProvider#transferToSocketFully | Major | datanode | Hemanth Boyina | Hemanth Boyina | | HDFS-15372 | Files in snapshots no longer see attribute provider permissions | Major | snapshots | Stephen O'Donnell | Stephen O'Donnell | | HADOOP-9851 | dfs -chown does not like + plus sign in user name | Minor | fs | Marc Villacorta | Andras Bokor | | YARN-10308 | Update javadoc and variable names for keytab in yarn services as it supports filesystems other than hdfs and local file system | Minor | documentation | Bilwa S T | Bilwa S T | | MAPREDUCE-7281 | Fix NoClassDefFoundError on mapred minicluster | Major | scripts | Masatake Iwasaki | Masatake Iwasaki | | HADOOP-17029 | ViewFS does not return correct user/group and ACL | Major | fs, viewfs | Abhishek Das | Abhishek Das | | HDFS-14546 | Document block placement policies | Major | documentation | Íñigo Goiri | Amithsha | | HADOOP-17068 | client fails forever when namenode ipaddr changed | Major | hdfs-client | Sean Chow | Sean Chow | | HDFS-15378 | TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight is failing on trunk | Major | test | Hemanth Boyina | Hemanth Boyina | | YARN-10328 | Too many ZK Curator NodeExists exception logs in YARN Service AM logs | Major | yarn | Bilwa S T | Bilwa S T | | YARN-9903 | Support reservations continue looking for Node Labels | Major | capacity scheduler | Tarun Parimi | Jim Brennan | | YARN-10331 | Upgrade node.js to 10.21.0 | Critical | build, yarn-ui-v2 | Akira Ajisaka |
Akira Ajisaka | | HADOOP-17032 | Handle an internal dir in viewfs having multiple children mount points pointing to different filesystems | Major | fs, viewfs | Abhishek Das | Abhishek Das | | YARN-10318 | ApplicationHistory Web UI incorrect column indexing | Minor | yarn | Andras Gyori | Andras Gyori | | YARN-10330 | Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule | Major | capacity scheduler, capacityscheduler, test | Peter Bacsko | Peter Bacsko | | HDFS-15446 | CreateSnapshotOp fails during edit log loading for /.reserved/raw/path with error java.io.FileNotFoundException: Directory does not exist: /.reserved/raw/path | Major | hdfs | Srinivasu Majeti | Stephen O'Donnell | | HDFS-15451 | Restarting name node stuck in safe mode when using provided storage | Major | namenode | shanyu zhao | shanyu zhao | | HADOOP-17117 | Fix typos in hadoop-aws documentation | Trivial | documentation, fs/s3 | Sebastian Nagel | Sebastian Nagel | | YARN-10344 | Sync netty versions in hadoop-yarn-csi | Major | build | Akira Ajisaka | Akira Ajisaka | | YARN-10341 | Yarn Service Container Completed event doesn't get processed | Critical | service-scheduler | Bilwa S T | Bilwa S T | | HADOOP-17116 | Skip Retry INFO logging on first failover from a proxy | Major | ha | Hanisha Koneru | Hanisha Koneru | | HADOOP-16998 | WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException | Major | fs/azure | Anoop Sam John | Anoop Sam John | | YARN-10348 | Allow RM to always cancel tokens after app completes | Major | yarn | Jim Brennan | Jim Brennan | | MAPREDUCE-7284 | TestCombineFileInputFormat#testMissingBlocks fails | Major | test | Akira Ajisaka | Akira Ajisaka | | YARN-10350 | TestUserGroupMappingPlacementRule fails | Major | test | Akira Ajisaka | Bilwa S T | | MAPREDUCE-7285 | Junit class missing from hadoop-mapreduce-client-jobclient-*-tests jar | Major | test | Eric Badger | Masatake Iwasaki | | HDFS-14498 | LeaseManager can loop forever on the file for which create has failed | Major | namenode | Sergey Shelukhin | Stephen O'Donnell | | YARN-10339 | Timeline Client in Nodemanager gets 403 errors when simple auth is used in kerberos environments | Major | timelineclient | Tarun Parimi | Tarun Parimi | | HDFS-15198 | RBF: Add test for MountTableRefresherService failed to refresh other router MountTableEntries in secure mode | Major | rbf | Chenyu Zheng | Chenyu Zheng | | HADOOP-17119 | Jetty upgrade to causes MR app fail with IOException | Major | common | Bilwa S T | Bilwa S T | | HDFS-15246 | ArrayIndexOfboundsException in BlockManager CreateLocatedBlock | Major | namanode | Hemanth Boyina | Hemanth Boyina | | HADOOP-17138 | Fix spotbugs warnings surfaced after upgrade to 4.0.6 | Minor | build | Masatake Iwasaki | Masatake Iwasaki | | YARN-4771 | Some containers can be skipped during log aggregation after NM restart | Major | nodemanager | Jason Darrell Lowe | Jim Brennan | | HADOOP-17153 | Add boost installation steps to build instruction on CentOS 8 | Major | documentation | Masatake Iwasaki | Masatake Iwasaki | | YARN-10367 | Failed to get nodejs 10.21.0 when building docker image | Blocker | build, webapp | Akira Ajisaka | Akira Ajisaka | | YARN-10362 | Javadoc for TimelineReaderAuthenticationFilterInitializer is broken | Minor | documentation | Xieming Li | Xieming Li | | YARN-10366 | Yarn rmadmin help message shows two labels for one node for replaceLabelsOnNode | Major | yarn | Tanu Ajmera | Tanu Ajmera | | MAPREDUCE-7051 |
Fix typo in MultipleOutputFormat | Trivial | mrv1 | ywheel | ywheel | | HDFS-15313 | Ensure inodes in active filesystem are not deleted during snapshot delete | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | HDFS-14950 | missing libhdfspp libs in dist-package | Major | build, libhdfs++ | Yuan Zhou | Yuan Zhou | | YARN-10359 | Log container report only if list is not empty | Minor | nodemanager | Bilwa S T | Bilwa S T | | HDFS-15229 | Truncate info should be logged at INFO level | Major | . | Ravuri Sushma sree | Ravuri Sushma sree | | HDFS-15503 | File and directory permissions are not able to be modified from WebUI | Major | ui | Hemanth Boyina | Hemanth Boyina | | YARN-10383 | YarnCommands.md is inconsistent with the source code | Minor | documentation | zhaoshengjie | zhaoshengjie | | YARN-10377 | Clicking on queue in Capacity Scheduler legacy ui does not show any applications | Major | ui | Tarun Parimi | Tarun Parimi | | HADOOP-17184 | Add mvn-custom-repos parameter to yetus calls | Major | build | Mingliang Liu | Mingliang Liu | | HDFS-15499 | Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion | Major | httpfs | Mingliang Liu | Mingliang Liu | | HADOOP-17186 | Fixing javadoc in ListingOperationCallbacks | Major | build, documentation | Akira Ajisaka | Mukund Thakur | | HADOOP-17164 | UGI loginUserFromKeytab doesnt set the last login time | Major | security | Sandeep Guggilam | Sandeep Guggilam | | YARN-4575 | ApplicationResourceUsageReport should return ALL reserved resource | Major | scheduler | Bibin Chundatt | Bibin Chundatt | | YARN-10388 | RMNode updatedCapability flag not set while RecommissionNodeTransition | Major | resourcemanager | Pranjal Protim Borah | Pranjal Protim Borah | | HADOOP-17182 | Dead links in breadcrumbs | Major | documentation | Akira Ajisaka | Akira Ajisaka | | HDFS-15443 | Setting dfs.datanode.max.transfer.threads to a very small value can cause strange" }, { "data": "| Major | datanode | AMC-team | AMC-team | | YARN-10364 | Absolute Resource [memory=0] is considered as Percentage config type | Major | capacity scheduler | Prabhu Joseph | Bilwa S T | | HDFS-15508 | [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module | Major | documentation | Akira Ajisaka | Akira Ajisaka | | HDFS-15506 | [JDK 11] Fix javadoc errors in hadoop-hdfs module | Major | documentation | Akira Ajisaka | Xieming Li | | HDFS-15507 | [JDK 11] Fix javadoc errors in hadoop-hdfs-client module | Major | documentation | Akira Ajisaka | Xieming Li | | HADOOP-17196 | Fix C/C++ standard warnings | Major | build | Gautham Banasandra | Gautham Banasandra | | HDFS-15523 | Fix the findbugs warnings from HDFS-15520 | Major | namenode | Tsz-wo Sze | Tsz-wo Sze | | HADOOP-17204 | Fix typo in Hadoop KMS document | Trivial | documentation, kms | Akira Ajisaka | Xieming Li | | YARN-10336 | RM page should throw exception when command injected in RM REST API to get applications | Major | webapp | Rajshree Mishra | Bilwa S T | | HDFS-15439 | Setting dfs.mover.retry.max.attempts to negative value will retry forever. 
| Major | balancer & mover | AMC-team | AMC-team | | YARN-10391 | module-gpu functionality is broken in container-executor | Major | nodemanager | Eric Badger | Eric Badger | | HDFS-15535 | RBF: Fix Namespace path to snapshot path resolution for snapshot API | Major | rbf | Ayush Saxena | Ayush Saxena | | HDFS-14504 | Rename with Snapshots does not honor quota limit | Major | snapshots | Shashikant Banerjee | Hemanth Boyina | | HADOOP-17209 | Erasure Coding: Native library memory leak | Major | native | Sean Chow | Sean Chow | | HADOOP-17220 | Upgrade slf4j to 1.7.30 ( To Address: CVE-2018-8088) | Major | build, common | Brahma Reddy Battula | Brahma Reddy Battula | | HDFS-14852 | Removing from LowRedundancyBlocks does not remove the block from all queues | Major | namenode | Hui Fei | Hui Fei | | HDFS-15536 | RBF: Clear Quota in Router was not consistent | Critical | rbf | Hemanth Boyina | Hemanth Boyina | | HDFS-15510 | RBF: Quota and Content Summary was not correct in Multiple Destinations | Critical | rbf | Hemanth Boyina | Hemanth Boyina | | HDFS-15540 | Directories protected from delete can still be moved to the trash | Major | namenode | Stephen ODonnell | Stephen ODonnell | | HDFS-15471 | TestHDFSContractMultipartUploader fails on trunk | Major | test | Ahmed Hussein | Steve Loughran | | HDFS-15290 | NPE in HttpServer during NameNode startup | Major | namenode | Konstantin Shvachko | Simbarashe Dzinamarira | | HADOOP-17240 | Fix wrong command line for setting up CentOS 8 | Minor | documentation | Masatake Iwasaki | Takeru Kuramoto | | YARN-10419 | Javadoc error in hadoop-yarn-server-common module | Major | build, documentation | Akira Ajisaka | Masatake Iwasaki | | YARN-10416 | Typos in YarnScheduler#allocate methods doc comment | Minor | docs | Wanqiang Ji | Siddharth Ahuja | | HADOOP-17245 | Add RootedOzFS AbstractFileSystem to core-default.xml | Major | fs | Bharat Viswanadham | Bharat Viswanadham | | HADOOP-17158 | Test timeout for ITestAbfsInputStreamStatistics#testReadAheadCounters | Major | fs/azure | Mehakmeet Singh | Mehakmeet Singh | | YARN-10397 | SchedulerRequest should be forwarded to scheduler if custom scheduler supports placement constraints | Minor | capacity scheduler | Bilwa S T | Bilwa S T | | HDFS-15573 | Only log warning if considerLoad and considerStorageType are both true | Major | hdfs | Stephen ODonnell | Stephen ODonnell | | YARN-10430 | Log improvements in NodeStatusUpdaterImpl | Minor | nodemanager | Bilwa S T | Bilwa S T | | HADOOP-17262 | Switch to Yetus main branch | Major | build | Akira Ajisaka | Akira Ajisaka | | HADOOP-17246 | Fix build the hadoop-build Docker image failed | Major | build | Wanqiang Ji | Wanqiang Ji | | HDFS-15438 | Setting" }, { "data": "= 0 will fail the block copy | Major | balancer & mover | AMC-team | AMC-team | | HADOOP-17203 | Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS | Major | fs/azure | Mehakmeet Singh | Thomas Marqardt | | MAPREDUCE-7294 | Only application master should upload resource to Yarn Shared Cache | Major | mrv2 | zhenzhao wang | zhenzhao wang | | HADOOP-17277 | Correct spelling errors for separator | Trivial | common | Hui Fei | Hui Fei | | YARN-10443 | Document options of logs CLI | Major | yarn | Adam Antal | Ankit Kumar | | YARN-10438 | Handle null containerId in ClientRMService#getContainerReport() | Major | resourcemanager | Raghvendra Singh | Shubham Gupta | | HADOOP-17286 | Upgrade to jQuery 3.5.1 in hadoop-yarn-common | Major | build, common | Wei-Chiu Chuang | Aryan Gupta | | 
HDFS-15591 | RBF: Fix webHdfs file display error | Major | rbf, webhdfs | Zhaohui Wang | Zhaohui Wang | | MAPREDUCE-7289 | Fix wrong comment in LongLong.java | Trivial | documentation, examples | Akira Ajisaka | Wanqiang Ji | | HDFS-15600 | TestRouterQuota fails in trunk | Major | rbf | Ayush Saxena | huangtianhua | | YARN-9809 | NMs should supply a health status when registering with RM | Major | nodemanager, resourcemanager | Eric Badger | Eric Badger | | YARN-10447 | TestLeafQueue: ActivitiesManager thread might interfere with ongoing stubbing | Major | test | Peter Bacsko | Peter Bacsko | | HADOOP-17297 | Use Yetus before YETUS-994 to enable adding comments to GitHub | Major | build | Akira Ajisaka | Akira Ajisaka | | HDFS-15458 | TestNameNodeRetryCacheMetrics fails intermittently | Major | hdfs, namenode | Ahmed Hussein | Hui Fei | | HADOOP-17294 | Fix typos existance to existence | Trivial | common | Ikko Ashimine | Ikko Ashimine | | HDFS-15543 | RBF: Write Should allow, when a subcluster is unavailable for RANDOM mount points with fault Tolerance enabled. | Major | rbf | Harshakiran Reddy | Hemanth Boyina | | YARN-10393 | MR job live lock caused by completed state container leak in heartbeat between node manager and RM | Major | nodemanager, yarn | zhenzhao wang | Jim Brennan | | HDFS-15253 | Set default throttle value on dfs.image.transfer.bandwidthPerSec | Major | namenode | Karthik Palanisamy | Karthik Palanisamy | | HDFS-15610 | Reduce datanode upgrade/hardlink thread | Major | datanode | Karthik Palanisamy | Karthik Palanisamy | | YARN-10455 | TestNMProxy.testNMProxyRPCRetry is not consistent | Major | test | Ahmed Hussein | Ahmed Hussein | | HDFS-15456 | TestExternalStoragePolicySatisfier fails intermittently | Major | test | Ahmed Hussein | Leon Gao | | HADOOP-17223 | update org.apache.httpcomponents:httpclient to 4.5.13 and httpcore to 4.4.13 | Blocker | common | Pranav Bheda | Pranav Bheda | | YARN-10448 | SLS should set default user to handle SYNTH format | Major | scheduler-load-simulator | Qi Zhu | Qi Zhu | | HDFS-15628 | HttpFS server throws NPE if a file is a symlink | Major | fs, httpfs | Ahmed Hussein | Ahmed Hussein | | HDFS-15627 | Audit log deletes before collecting blocks | Major | logging, namenode | Ahmed Hussein | Ahmed Hussein | | HADOOP-17309 | Javadoc warnings and errors are ignored in the precommit jobs | Major | build, documentation | Akira Ajisaka | Akira Ajisaka | | HDFS-14383 | Compute datanode load based on StoragePolicy | Major | hdfs, namenode | Karthik Palanisamy | Ayush Saxena | | HADOOP-17310 | Touch command with -c option is broken | Major | command | Ayush Saxena | Ayush Saxena | | HADOOP-17298 | Backslash in username causes build failure in the environment started by" }, { "data": "| Minor | build | Takeru Kuramoto | Takeru Kuramoto | | HDFS-15639 | [JDK 11] Fix Javadoc errors in hadoop-hdfs-client | Major | documentation | Takanobu Asanuma | Takanobu Asanuma | | HADOOP-17315 | Use shaded guava in ClientCache.java | Minor | build | Akira Ajisaka | Akira Ajisaka | | YARN-10453 | Add partition resource info to get-node-labels and label-mappings api responses | Major | yarn | Akhil PB | Akhil PB | | HDFS-15622 | Deleted blocks linger in the replications queue | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | HDFS-15641 | DataNode could meet deadlock if invoke refreshNameNode | Critical | datanode | Hongbing Wang | Hongbing Wang | | HADOOP-17328 | LazyPersist Overwrite fails in direct write mode | Major | command | Ayush Saxena | Ayush Saxena | | 
| HDFS-15580 | [JDK 12] DFSTestUtil#addDataNodeLayoutVersion fails | Major | test | Akira Ajisaka | Akira Ajisaka |
| MAPREDUCE-7302 | Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail | Major | test | Peter Bacsko | Peter Bacsko |
| HDFS-15644 | Failed volumes can cause DNs to stop block reporting | Major | block placement, datanode | Ahmed Hussein | Ahmed Hussein |
| HADOOP-17236 | Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640 | Major | build, command | Brahma Reddy Battula | Brahma Reddy Battula |
| YARN-10467 | ContainerIdPBImpl objects can be leaked in RMNodeImpl.completedContainers | Major | resourcemanager | Haibo Chen | Haibo Chen |
| HADOOP-17329 | mvn site commands fails due to MetricsSystemImpl changes | Major | build, common | Xiaoqiao He | Xiaoqiao He |
| YARN-10442 | RM should make sure node label file highly available | Major | resourcemanager | Surendra Singh Lilhore | Surendra Singh Lilhore |
| HDFS-15651 | Client could not obtain block when DN CommandProcessingThread exit | Major | datanode | Yiqun Lin | Mingxiang Li |
| HADOOP-17341 | Upgrade commons-codec to 1.15 | Minor | build, common | Dongjoon Hyun | Dongjoon Hyun |
| HADOOP-17340 | TestLdapGroupsMapping failing -string mismatch in exception validation | Major | test | Steve Loughran | Steve Loughran |
| HDFS-15667 | Audit log record the unexpected allowed result when delete called | Major | hdfs | Baolong Mao | Baolong Mao |
| HADOOP-17352 | Update PATCHNAMINGRULE in the personality file | Minor | build | Akira Ajisaka | Akira Ajisaka |
| YARN-10458 | Hive On Tez queries fails upon submission to dynamically created pools | Major | resourcemanager | Anand Srinivasan | Peter Bacsko |
| HDFS-15485 | Fix outdated properties of JournalNode when performing rollback | Minor | journal-node | Deegue | Deegue |
| HADOOP-17096 | Fix ZStandardCompressor input buffer offset | Major | io | Stephen Jung (Stripe) | Stephen Jung (Stripe) |
| HADOOP-17324 | Don't relocate org.bouncycastle in shaded client jars | Critical | build | Chao Sun | Chao Sun |
| HADOOP-17373 | hadoop-client-integration-tests doesn't work when building with skipShade | Major | build | Chao Sun | Chao Sun |
| HADOOP-17358 | Improve excessive reloading of Configurations | Major | conf | Ahmed Hussein | Ahmed Hussein |
| HDFS-15545 | (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire | Major | webhdfs | Issac Buenrostro | Issac Buenrostro |
| HDFS-15538 | Fix the documentation for dfs.namenode.replication.max-streams in | Major | documentation | Xieming Li | Xieming Li |
| HADOOP-17362 | Doing hadoop ls on Har file triggers too many RPC calls | Major | fs | Ahmed Hussein | Ahmed Hussein |
| YARN-10485 | TimelineConnector swallows InterruptedException | Major | yarn-client | Ahmed Hussein | Ahmed Hussein |
| HADOOP-17360 | Log the remote address for authentication success | Minor | ipc | Ahmed Hussein | Ahmed Hussein |
| HDFS-15685 | [JDK 14] TestConfiguredFailoverProxyProvider#testResolveDomainNameUsingDNS fails | Major | test | Akira Ajisaka | Akira Ajisaka |
| MAPREDUCE-7305 | [JDK 11] TestMRJobsWithProfiler fails | Major | test | Akira Ajisaka | Akira Ajisaka |
| YARN-10396 | Max applications calculation per queue disregards queue level settings in absolute mode | Major | capacity scheduler | Benjamin Teke | Benjamin Teke |
| HADOOP-17390 | Skip license check on lz4 code files | Major | build | Zhihua Deng | Zhihua Deng |
| HADOOP-17346 | Fair call queue is defeated by abusive service principals | Major | common, ipc | Ahmed Hussein | Ahmed Hussein |
| YARN-10470 | When building new web ui with root user, the bower install should support it. | Major | build, yarn-ui-v2 | Qi Zhu | Qi Zhu |
| YARN-10468 | TestNodeStatusUpdater does not handle early failure in threads | Major | nodemanager | Ahmed Hussein | Ahmed Hussein |
| YARN-10488 | Several typos in package: org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair | Minor | fairscheduler | Szilard Nemeth | Ankit Kumar |
| HDFS-15698 | Fix the typo of dfshealth.html after HDFS-15358 | Trivial | namenode | Hui Fei | Hui Fei |
| YARN-10498 | Fix Yarn CapacityScheduler Markdown document | Trivial | documentation | zhaoshengjie | zhaoshengjie |
| HADOOP-17399 | lz4 sources missing for native Visual Studio project | Major | native | Gautham Banasandra | Gautham Banasandra |
| HDFS-15695 | NN should not let the balancer run in safemode | Major | namenode | Ahmed Hussein | Ahmed Hussein |
| YARN-9883 | Reshape SchedulerHealth class | Minor | resourcemanager, yarn | Adam Antal | D M Murali Krishna Reddy |
| YARN-10511 | Update yarn.nodemanager.env-whitelist value in docs | Minor | documentation | Andrea Scarpino | Andrea Scarpino |
| HADOOP-16881 | KerberosAuthentication does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns | Major | auth, security | Prabhu Joseph | Attila Magyar |
| HADOOP-16080 | hadoop-aws does not work with hadoop-client-api | Major | fs/s3 | Keith Turner | Chao Sun |
| HDFS-15660 | StorageTypeProto is not compatiable between 3.x and 2.6 | Major | build | Ryan Wu | Ryan Wu |
| HDFS-15707 | NNTop counts don't add up as expected | Major | hdfs, metrics, namenode | Ahmed Hussein | Ahmed Hussein |
| HDFS-15709 | EC: Socket file descriptor leak in StripedBlockChecksumReconstructor | Major | datanode, ec, erasure-coding | Yushi Hayasaka | Yushi Hayasaka |
| YARN-10495 | make the rpath of container-executor configurable | Major | yarn | angerszhu | angerszhu |
| HDFS-15240 | Erasure Coding: dirty buffer causes reconstruction block error | Blocker | datanode, erasure-coding | HuangTao | HuangTao |
| YARN-10491 | Fix deprecation warnings in SLSWebApp.java | Minor | build | Akira Ajisaka | Ankit Kumar |
| HADOOP-13571 | ServerSocketUtil.getPort() should use loopback address, not | Major | net | Eric Badger | Eric Badger |
| HDFS-15725 | Lease Recovery never completes for a committed block which the DNs never finalize | Major | namenode | Stephen O'Donnell | Stephen O'Donnell |
| HDFS-15170 | EC: Block gets marked as CORRUPT in case of failover and pipeline recovery | Critical | erasure-coding | Ayush Saxena | Ayush Saxena |
| YARN-10499 | TestRouterWebServicesREST fails | Major | test | Akira Ajisaka | Akira Ajisaka |
| YARN-10536 | Client in distributedShell swallows interrupt exceptions | Major | client, distributed-shell | Ahmed Hussein | Ahmed Hussein |
| HDFS-15116 | Correct spelling of comments for NNStorage.setRestoreFailedStorage | Trivial | namanode | Xudong Cao | Xudong Cao |
| HDFS-15743 | Fix -Pdist build failure of hadoop-hdfs-native-client | Major | build | Masatake Iwasaki | Masatake Iwasaki |
| HDFS-15739 | Missing Javadoc for a param in DFSNetworkTopology | Minor | hdfs | zhanghuazong | zhanghuazong |
| YARN-10334 | TestDistributedShell leaks resources on timeout/failure | Major | distributed-shell, test, yarn | Ahmed Hussein | Ahmed Hussein |
| YARN-10558 | Fix failure of TestDistributedShell#testDSShellWithOpportunisticContainers | Minor | test | Masatake Iwasaki | Masatake Iwasaki |
| HDFS-15719 | [Hadoop 3] Both NameNodes can crash simultaneously due to the short JN socket timeout | Critical | hdfs, journal-node, namanode | Wei-Chiu Chuang | Wei-Chiu Chuang |
| YARN-10560 | Upgrade node.js to 10.23.1 and yarn to 1.22.5 in Web UI v2 | Major | webapp, yarn-ui-v2 | Akira Ajisaka | Akira Ajisaka |
| HADOOP-17444 | ADLFS: Update SDK version from 2.3.6 to 2.3.9 | Minor | fs/adl | Bilahari T H | Bilahari T H |
| YARN-10528 | maxAMShare should only be accepted for leaf queues, not parent queues | Major | fairscheduler | Siddharth Ahuja | Siddharth Ahuja |
| YARN-10553 | Refactor TestDistributedShell | Major | distributed-shell, test | Ahmed Hussein | Ahmed Hussein |
| HADOOP-17438 | Increase docker memory limit in Jenkins | Major | build, scripts, test, yetus | Ahmed Hussein | Ahmed Hussein |
| MAPREDUCE-7310 | Clear the fileMap in JHEventHandlerForSigtermTest | Minor | test | Zhengxi Li | Zhengxi Li |
| YARN-7200 | SLS generates a realtimetrack.json file but that file is missing the closing ] | Minor | scheduler-load-simulator | Grant Sohn | Agshin Kazimli |
| HADOOP-16947 | Stale record should be remove when MutableRollingAverages generating aggregate data. | Major | metrics | Haibin Huang | Haibin Huang |
| YARN-10515 | Fix flaky test TestCapacitySchedulerAutoQueueCreation.testDynamicAutoQueueCreationWithTags | Major | test | Peter Bacsko | Peter Bacsko |
| HADOOP-17224 | Install Intel ISA-L library in Dockerfile | Blocker | build | Takanobu Asanuma | Takanobu Asanuma |
| HDFS-15632 | AbstractContractDeleteTest should set recursive parameter to true for recursive test cases. | Major | test | Konstantin Shvachko | Anton Kutuzov |
| HADOOP-17496 | Build failure due to python2.7 deprecation by pip | Blocker | build | Gautham Banasandra | Gautham Banasandra |
| HDFS-15661 | The DeadNodeDetector shouldn't be shared by different DFSClients.
| Major | datanode | Jinglun | Jinglun | | HDFS-10498 | Intermittent test failure org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength | Major | hdfs, snapshots | Hanisha Koneru | Jim Brennan | | HADOOP-17506 | Fix typo in BUILDING.txt | Trivial | documentation | Gautham Banasandra | Gautham Banasandra | | HADOOP-17507 | Add build instructions for installing GCC 9 and CMake 3.19 | Trivial | documentation | Gautham Banasandra | Gautham Banasandra | | HDFS-15791 | Possible Resource Leak in FSImageFormatProtobuf | Major | namenode | Narges Shadab | Narges Shadab | | HDFS-15795 | EC: Wrong checksum when reconstruction was failed by exception | Major | datanode, ec, erasure-coding | Yushi Hayasaka | Yushi Hayasaka | | HDFS-15779 | EC: fix NPE caused by StripedWriter.clearBuffers during reconstruct block | Major | erasure-coding | Hongbing Wang | Hongbing Wang | | YARN-10611 | Fix that shaded should be used for google guava imports in" }, { "data": "| Major | test | Qi Zhu | Qi Zhu | | HDFS-15798 | EC: Reconstruct task failed, and It would be XmitsInProgress of DN has negative number | Major | erasure-coding | Haiyang Hu | Haiyang Hu | | HADOOP-17513 | Checkstyle IllegalImport does not catch guava imports | Major | build, common | Ahmed Hussein | Ahmed Hussein | | YARN-10428 | Zombie applications in the YARN queue using FAIR + sizebasedweight | Critical | capacityscheduler | Guang Yang | Andras Gyori | | YARN-10607 | User environment is unable to prepend PATH when mapreduce.admin.user.env also sets PATH | Major | container, nodeattibute | Eric Badger | Eric Badger | | HDFS-15792 | ClasscastException while loading FSImage | Major | nn | Renukaprasad C | Renukaprasad C | | YARN-10593 | Fix incorrect string comparison in GpuDiscoverer | Major | resourcemanager | Peter Bacsko | Peter Bacsko | | HADOOP-17516 | Upgrade ant to 1.10.9 | Major | common | Akira Ajisaka | Akira Ajisaka | | YARN-10618 | RM UI2 Application page shows the AM preempted containers instead of the nonAM ones | Minor | yarn-ui-v2 | Benjamin Teke | Benjamin Teke | | YARN-10500 | TestDelegationTokenRenewer fails intermittently | Major | test | Akira Ajisaka | Masatake Iwasaki | | HDFS-15839 | RBF: Cannot get method setBalancerBandwidth on Router Client | Major | rbf | Yang Yun | Yang Yun | | HDFS-15806 | DeadNodeDetector should close all the threads when it is closed. | Major | datanode | Jinglun | Jinglun | | HADOOP-17534 | Upgrade Jackson databind to 2.10.5.1 | Major | build | Adam Roberts | Akira Ajisaka | | MAPREDUCE-7323 | Remove jobhistorysummary.py | Major | examples | Akira Ajisaka | Akira Ajisaka | | YARN-10647 | Fix TestRMNodeLabelsManager failed after YARN-10501. | Major | test | Qi Zhu | Qi Zhu | | HADOOP-17510 | Hadoop prints sensitive Cookie information. | Major | security | Renukaprasad C | Renukaprasad C | | HDFS-15422 | Reported IBR is partially replaced with stored info when queuing. 
| Critical | namenode | Kihwal Lee | Stephen ODonnell | | YARN-10651 | CapacityScheduler crashed with NPE in AbstractYarnScheduler.updateNodeResource() | Major | capacity scheduler | Haibo Chen | Haibo Chen | | MAPREDUCE-7320 | ClusterMapReduceTestCase does not clean directories | Major | test | Ahmed Hussein | Ahmed Hussein | | YARN-10656 | Parsing error in CapacityScheduler.md | Major | documentation | Akira Ajisaka | Akira Ajisaka | | HDFS-14013 | Skip any credentials stored in HDFS when starting ZKFC | Major | hdfs | Krzysztof Adamski | Stephen ODonnell | | HDFS-15849 | ExpiredHeartbeats metric should be of Type.COUNTER | Major | metrics | Konstantin Shvachko | Qi Zhu | | HADOOP-17560 | Fix some spelling errors | Trivial | documentation | jiaguodong | jiaguodong | | YARN-10649 | Fix" }, { "data": "leak | Major | resourcemanager | Max Xie | Max Xie | | YARN-10672 | All testcases in TestReservations are flaky | Major | reservation system, test | Szilard Nemeth | Szilard Nemeth | | HADOOP-17557 | skip-dir option is not processed by Yetus | Major | build, precommit, yetus | Ahmed Hussein | Ahmed Hussein | | YARN-10676 | Improve code quality in TestTimelineAuthenticationFilterForV1 | Minor | test, timelineservice | Szilard Nemeth | Szilard Nemeth | | YARN-10675 | Consolidate YARN-10672 and YARN-10447 | Major | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | YARN-10678 | Try blocks without catch blocks in SLS scheduler classes can swallow other exceptions | Major | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | YARN-10677 | Logger of SLSFairScheduler is provided with the wrong class | Major | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | YARN-10681 | Fix assertion failure message in BaseSLSRunnerTest | Trivial | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | YARN-10679 | Better logging of uncaught exceptions throughout SLS | Major | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | YARN-10671 | Fix Typo in TestSchedulingRequestContainerAllocation | Minor | capacity scheduler, test | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | HDFS-15875 | Check whether file is being truncated before truncate | Major | datanode, fs, namanode | Hui Fei | Hui Fei | | HADOOP-17573 | Fix compilation error of OBSFileSystem in trunk | Major | fs/huawei | Masatake Iwasaki | Masatake Iwasaki | | HADOOP-17582 | Replace GitHub App Token with GitHub OAuth token | Major | build | Akira Ajisaka | Akira Ajisaka | | YARN-10687 | Add option to disable/enable free disk space checking and percentage checking for full and not-full disks | Major | nodemanager | Qi Zhu | Qi Zhu | | HADOOP-17581 | Fix reference to LOG is ambiguous after HADOOP-17482 | Major | test | Xiaoyu Yao | Xiaoyu Yao | | HADOOP-17586 | Upgrade org.codehaus.woodstox:stax2-api to 4.2.1 | Major | build, common | Ayush Saxena | Ayush Saxena | | HDFS-15816 | Fix shouldAvoidStaleDataNodesForWrite returns when no stale node in cluster. | Minor | block placement | Yang Yun | Yang Yun | | HADOOP-17585 | Correct timestamp format in the docs for the touch command | Major | common | Stephen ODonnell | Stephen ODonnell | | HDFS-15809 | DeadNodeDetector doesnt remove live nodes from dead node set. 
| Major | datanode | Jinglun | Jinglun | | HADOOP-17532 | Yarn Job execution get failed when LZ4 Compression Codec is used | Major | common | Bhavik Patel | Bhavik Patel | | YARN-10588 | Percentage of queue and cluster is zero in WebUI | Major | resourcemanager | Bilwa S T | Bilwa S T | | YARN-10682 | The scheduler monitor policies conf should trim values separated by comma | Major | capacity scheduler | Qi Zhu | Qi Zhu | | MAPREDUCE-7322 | revisiting TestMRIntermediateDataEncryption | Major | job submission, security, test | Ahmed Hussein | Ahmed Hussein | | YARN-10652 | Capacity Scheduler fails to handle user weights for a user that has a . (dot) in it | Major | capacity scheduler | Siddharth Ahuja | Siddharth Ahuja | | HADOOP-17578 | Improve UGI debug log to help troubleshooting TokenCache related issues | Major | security | Xiaoyu Yao | Xiaoyu Yao | | YARN-10685 | Fix typos in AbstractCSQueue | Major | capacity scheduler | Qi Zhu | Qi Zhu | | YARN-10703 | Fix potential null pointer error of gpuNodeResourceUpdateHandler in NodeResourceMonitorImpl. | Major | nodemanager | Qi Zhu | Qi Zhu | | HDFS-15868 | Possible Resource Leak in EditLogFileOutputStream | Major | namanode | Narges Shadab | Narges Shadab | | HADOOP-17592 | Fix the wrong CIDR range example in Proxy User documentation | Minor | documentation | Kwangsun Noh | Kwangsun Noh | | YARN-10706 | Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 | Major | buid | Mingliang Liu | Mingliang Liu | | MAPREDUCE-7325 | Intermediate data encryption is broken in LocalJobRunner | Major | job submission, security | Ahmed Hussein | Ahmed Hussein | | HADOOP-17598 | Fix java doc issue introduced by HADOOP-17578 | Minor | documentation | Xiaoyu Yao | Xiaoyu Yao | | HDFS-15908 | Possible Resource Leak in" }, { "data": "| Major | journal-node | Narges Shadab | Narges Shadab | | HDFS-15910 | Replace bzero with explicit_bzero for better safety | Critical | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | YARN-10697 | Resources are displayed in bytes in UI for schedulers other than capacity | Major | ui, webapp | Bilwa S T | Bilwa S T | | HDFS-15918 | Replace RANDpseudobytes in sasldigestmd5.cc | Critical | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HADOOP-17602 | Upgrade JUnit to 4.13.1 | Major | build, security, test | Ahmed Hussein | Ahmed Hussein | | HDFS-15922 | Use memcpy for copying non-null terminated string in jni_helper.c | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-15900 | RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode | Major | rbf | Harunobu Daikoku | Harunobu Daikoku | | HDFS-15935 | Use memcpy for copying non-null terminated string | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | YARN-10501 | Cant remove all node labels after add node label without nodemanager port | Critical | yarn | caozhiqiang | caozhiqiang | | YARN-10437 | Destroy yarn service if any YarnException occurs during submitApp | Minor | yarn-native-services | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | YARN-10439 | Yarn Service AM listens on all IPs on the machine | Minor | security, yarn-native-services | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | YARN-10441 | Add support for hadoop.http.rmwebapp.scheduler.page.class | Major | scheduler | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | YARN-10466 | Fix NullPointerException in yarn-services Component.java | Minor | yarn-service | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | YARN-10716 | 
Fix typo in ContainerRuntime | Trivial | documentation | Wanqiang Ji | xishuhai | | HDFS-15928 | Replace RANDpseudobytes in rpc_engine.cc | Critical | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-15929 | Replace RANDpseudobytes in util.cc | Critical | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-15927 | Catch polymorphic type by reference | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | YARN-10718 | Fix CapacityScheduler#initScheduler log error. | Major | capacity scheduler | Qi Zhu | Qi Zhu | | HDFS-15494 | TestReplicaCachingGetSpaceUsed#testReplicaCachingGetSpaceUsedByRBWReplica Fails on Windows | Major | datanode, test | Ravuri Sushma sree | Ravuri Sushma sree | | HDFS-15222 | Correct the hdfs fsck -list-corruptfileblocks command output | Minor | hdfs, tools | Souryakanta Dwivedy | Ravuri Sushma sree | | HADOOP-17610 | DelegationTokenAuthenticator prints token information | Major | security | Ravuri Sushma sree | Ravuri Sushma sree | | HADOOP-17587 | Kinit with keytab should not display the keytab files full path in any logs | Major | security | Ravuri Sushma sree | Ravuri Sushma sree | | HDFS-15944 | Prevent truncation by snprintf | Critical | fuse-dfs, libhdfs | Gautham Banasandra | Gautham Banasandra | | HDFS-15930 | Fix some @param errors in DirectoryScanner. | Minor | datanode | Qi Zhu | Qi Zhu | | HADOOP-17619 | Fix DelegationTokenRenewer#updateRenewalTime java doc error. | Minor | documentation | Qi Zhu | Qi Zhu | | HDFS-15950 | Remove unused hdfs.proto import | Major | hdfs-client | Gautham Banasandra | Gautham Banasandra | | HDFS-15947 | Replace deprecated protobuf APIs | Critical | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-15949 | Fix integer overflow | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HADOOP-17588" }, { "data": "CryptoInputStream#close() should be synchronized | Major | crypto | Renukaprasad C | Renukaprasad C | | HADOOP-17621 | hadoop-auth to remove jetty-server dependency | Major | auth | Wei-Chiu Chuang | Wei-Chiu Chuang | | HDFS-15948 | Fix test4tests for libhdfspp | Critical | build, libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HADOOP-17617 | Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file | Major | documentation | Ravuri Sushma sree | Ravuri Sushma sree | | MAPREDUCE-7270 | TestHistoryViewerPrinter could be failed when the locale isnt English. 
| Minor | test | Sungpeo Kook | Sungpeo Kook | | HDFS-15916 | DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff | Major | distcp | Srinivasu Majeti | Ayush Saxena | | MAPREDUCE-7329 | HadoopPipes task may fail when linux kernel version change from 3.x to 4.x | Major | pipes | chaoli | chaoli | | MAPREDUCE-7334 | TestJobEndNotifier fails | Major | test | Akira Ajisaka | Akira Ajisaka | | HADOOP-17608 | Fix TestKMS failure | Major | kms | Akira Ajisaka | Akira Ajisaka | | YARN-10736 | Fix GetApplicationsRequest JavaDoc | Major | documentation | Miklos Gergely | Miklos Gergely | | HDFS-15423 | RBF: WebHDFS create shouldnt choose DN from all sub-clusters | Major | rbf, webhdfs | Chao Sun | Fengnan Li | | HDFS-15977 | Call explicit_bzero only if it is available | Major | libhdfs++ | Akira Ajisaka | Akira Ajisaka | | HDFS-15963 | Unreleased volume references cause an infinite loop | Critical | datanode | Shuyan Zhang | Shuyan Zhang | | HADOOP-17642 | Remove appender EventCounter to avoid instantiation | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17635 | Update the ubuntu version in the build instruction | Major | build, documentation | Akira Ajisaka | Masatake Iwasaki | | YARN-10460 | Upgrading to JUnit 4.13 causes tests in TestNodeStatusUpdater to fail | Major | nodemanager, test | Peter Bacsko | Peter Bacsko | | HADOOP-17505 | public interface GroupMappingServiceProvider needs default impl for getGroupsSet() | Major | security | Vinayakumar B | Vinayakumar B | | HDFS-15974 | RBF: Unable to display the datanode UI of the router | Major | rbf, ui | Xiangyi Zhu | Xiangyi Zhu | | HADOOP-17655 | Upgrade Jetty to 9.4.40 | Blocker | common | Akira Ajisaka | Akira Ajisaka | | YARN-10705 | Misleading DEBUG log for container assignment needs to be removed when the container is actually reserved, not assigned in FairScheduler | Minor | yarn | Siddharth Ahuja | Siddharth Ahuja | | YARN-10749 | Cant remove all node labels after add node label without nodemanager port, broken by YARN-10647 | Major | nodelabel | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | HDFS-15566 | NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0 | Blocker | namanode | Brahma Reddy Battula | Brahma Reddy Battula | | HADOOP-17650 | Fails to build using Maven" }, { "data": "| Major | build | Wei-Chiu Chuang | Viraj Jasani | | HDFS-15621 | Datanode DirectoryScanner uses excessive memory | Major | datanode | Stephen ODonnell | Stephen ODonnell | | HADOOP-17674 | Use spotbugs-maven-plugin in hadoop-huaweicloud | Major | build | Akira Ajisaka | Akira Ajisaka | | HDFS-15624 | Fix the SetQuotaByStorageTypeOp problem after updating hadoop | Major | hdfs | YaYun Wang | huangtianhua | | HDFS-15561 | RBF: Fix NullPointException when start dfsrouter | Major | rbf | Xie Lei | Fengnan Li | | HDFS-15865 | Interrupt DataStreamer thread | Minor | datanode | Karthik Palanisamy | Karthik Palanisamy | | HDFS-15810 | RBF: RBFMetricss TotalCapacity out of bounds | Major | rbf | Weison Wei | Fengnan Li | | HADOOP-17657 | SequenceFile.Writer should implement StreamCapabilities | Major | io | Kishen Das | Kishen Das | | YARN-10756 | Remove additional junit 4.11 dependency from javadoc | Major | build, test, timelineservice | ANANDA G B | Akira Ajisaka | | HADOOP-17375 | Fix the error of TestDynamometerInfra | Major | test | Akira Ajisaka | Takanobu Asanuma | | HDFS-16001 | TestOfflineEditsViewer.testStored() fails reading negative value of FSEditLogOpCodes | Blocker | hdfs | Konstantin 
Shvachko | Akira Ajisaka | | HADOOP-17686 | Avoid potential NPE by using Path#getParentPath API in hadoop-huaweicloud | Major | fs | Error Reporter | Viraj Jasani | | HADOOP-17689 | Avoid Potential NPE in org.apache.hadoop.fs | Major | fs | Error Reporter | Viraj Jasani | | HDFS-15988 | Stabilise HDFS Pre-Commit | Major | build, test | Ayush Saxena | Ayush Saxena | | HADOOP-17142 | Fix outdated properties of journal node when perform rollback | Minor | . | Deegue | Deegue | | HADOOP-17107 | hadoop-azure parallel tests not working on recent JDKs | Major | build, fs/azure | Steve Loughran | Steve Loughran | | YARN-10555 | Missing access check before getAppAttempts | Critical | webapp | lujie | lujie | | HADOOP-17703 | checkcompatibility.py errors out when specifying annotations | Major | scripts | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-17699 | Remove hardcoded SunX509 usage from SSLFactory | Major | common | Xiaoyu Yao | Xiaoyu Yao | | YARN-10777 | Bump node-sass from 4.13.0 to 4.14.1 in /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp | Major | yarn-ui-v2 | Wei-Chiu Chuang | Wei-Chiu Chuang | | YARN-10701 | The yarn.resource-types should support multi types without trimmed. | Major | resourcemanager | Qi Zhu | Qi Zhu | | HADOOP-14922 | Build of Mapreduce Native Task module fails with unknown opcode bswap | Major | common | Anup Halarnkar | Anup Halarnkar | | YARN-10766 | [UI2] Bump moment-timezone to 0.5.33 | Major | yarn, yarn-ui-v2 | Andras Gyori | Andras Gyori | | HADOOP-17718 | Explicitly set locale in the Dockerfile | Blocker | build | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-17700 | ExitUtil#halt info log should log HaltException | Major | common | Viraj Jasani | Viraj Jasani | | YARN-7769 | FS QueueManager should not create default queue at init | Major | fairscheduler | Wilfred Spiegelenburg | Benjamin Teke | | YARN-10770 | container-executor permission is wrong in SecureContainer.md | Major | documentation | Akira Ajisaka | Siddharth Ahuja | | YARN-10691 | DominantResourceCalculator isInvalidDivisor should consider only countable resource types | Major | yarn-common | Bilwa S T | Bilwa S T | | HDFS-16031 | Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap | Major | server | Narges Shadab | Narges Shadab | | MAPREDUCE-7348 | TestFrameworkUploader#testNativeIO fails | Major | test | Akira Ajisaka | Akira Ajisaka | | HDFS-15915 | Race condition with async edits logging due to updating txId outside of the namesystem log | Major | hdfs, namenode | Konstantin Shvachko | Konstantin Shvachko | | HDFS-16046 | TestBalanceProcedureScheduler and TestDistCpProcedure timeout | Major | rbf, test | Akira Ajisaka | Akira Ajisaka | | HADOOP-17723 | [build] fix the Dockerfile for ARM | Blocker | build | Wei-Chiu Chuang | Wei-Chiu Chuang | | HDFS-16051 | Misspelt words in" }, { "data": "line 881 and line 885 | Trivial | datanode | Ning Sheng | Ning Sheng | | HDFS-15998 | Fix NullPointException In listOpenFiles | Major | namanode | Haiyang Hu | Haiyang Hu | | YARN-10797 | Logging parameter issues in scheduler package | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | YARN-10796 | Capacity Scheduler: dynamic queue cannot scale out properly if its capacity is 0% | Major | capacity scheduler, capacityscheduler | Peter Bacsko | Peter Bacsko | | HDFS-16050 | Some dynamometer tests fail | Major | test | Akira Ajisaka | Akira Ajisaka | | HADOOP-17631 | Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when restrictSystemProps=true 
| Minor | common | Steve Loughran | Steve Loughran | | YARN-10809 | testWithHbaseConfAtHdfsFileSystem consistently failing | Major | test | Viraj Jasani | Viraj Jasani | | HADOOP-17750 | Fix asf license errors in newly added files by HADOOP-17727 | Major | build | Takanobu Asanuma | Takanobu Asanuma | | YARN-10803 | [JDK 11] TestRMFailoverProxyProvider and TestNoHaRMFailoverProxyProvider fails by ClassCastException | Major | test | Akira Ajisaka | Akira Ajisaka | | HDFS-16057 | Make sure the order for location in ENTERING_MAINTENANCE state | Minor | block placement | Tao Li | Tao Li | | YARN-10816 | Avoid doing delegation token ops when yarn.timeline-service.http-authentication.type=simple | Major | timelineclient | Tarun Parimi | Tarun Parimi | | HADOOP-17645 | Fix test failures in org.apache.hadoop.fs.azure.ITestOutputStreamSemantics | Minor | fs/azure | Anoop Sam John | Anoop Sam John | | HDFS-16055 | Quota is not preserved in snapshot INode | Major | hdfs | Siyao Meng | Siyao Meng | | HDFS-16068 | WebHdfsFileSystem has a possible connection leak in connection with HttpFS | Major | webhdfs | Takanobu Asanuma | Takanobu Asanuma | | YARN-10767 | Yarn Logs Command retrying on Standby RM for 30 times | Major | yarn-common | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | HDFS-15618 | Improve datanode shutdown latency | Major | datanode | Ahmed Hussein | Ahmed Hussein | | HADOOP-17760 | Delete hadoop.ssl.enabled and dfs.https.enable from docs and core-default.xml | Major | documentation | Takanobu Asanuma | Takanobu Asanuma | | HDFS-13671 | Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet | Major | namnode | Yiqun Lin | Haibin Huang | | HDFS-16061 | DFTestUtil.waitReplication can produce false positives | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | HDFS-14575 | LeaseRenewer#daemon threads leak in DFSClient | Major | dfsclient | Tao Yang | Renukaprasad C | | YARN-10826 | [UI2] Upgrade Node.js to at least v12.22.1 | Major | yarn-ui-v2 | Akira Ajisaka | Masatake Iwasaki | | HADOOP-17769 | Upgrade JUnit to" }, { "data": "| Major | build, test | Ahmed Hussein | Ahmed Hussein | | YARN-10824 | Title not set for JHS and NM webpages | Major | nodemanager | Rajshree Mishra | Bilwa S T | | HDFS-16092 | Avoid creating LayoutFlags redundant objects | Major | hdfs | Viraj Jasani | Viraj Jasani | | HDFS-16099 | Make bpServiceToActive to be volatile | Major | datanode | Shuyan Zhang | Shuyan Zhang | | HDFS-16109 | Fix flaky some unit tests since they offen timeout | Minor | test | Tao Li | Tao Li | | HDFS-16108 | Incorrect log placeholders used in JournalNodeSyncer | Minor | journal-node | Viraj Jasani | Viraj Jasani | | MAPREDUCE-7353 | Mapreduce job fails when NM is stopped | Major | task | Bilwa S T | Bilwa S T | | HDFS-16121 | Iterative snapshot diff report can generate duplicate records for creates, deletes and Renames | Major | snapshots | Srinivasu Majeti | Shashikant Banerjee | | HDFS-15796 | ConcurrentModificationException error happens on NameNode occasionally | Critical | hdfs | Daniel Ma | Daniel Ma | | HADOOP-17793 | Better token validation | Major | security | Artem Smotrakov | Artem Smotrakov | | HDFS-16042 | DatanodeAdminMonitor scan should be delay based | Major | datanode | Ahmed Hussein | Ahmed Hussein | | HDFS-16127 | Improper pipeline close recovery causes a permanent write failure or data loss. 
| Major | hdfs | Kihwal Lee | Kihwal Lee | | YARN-10855 | yarn logs cli fails to retrieve logs if any TFile is corrupt or empty | Major | yarn | Jim Brennan | Jim Brennan | | HDFS-16087 | RBF balance process is stuck at DisableWrite stage | Major | rbf | Eric Yin | Eric Yin | | HADOOP-17028 | ViewFS should initialize target filesystems lazily | Major | client-mounts, fs, viewfs | Uma Maheswara Rao G | Abhishek Das | | YARN-10630 | [UI2] Ambiguous queue name resolution | Major | yarn-ui-v2 | Andras Gyori | Andras Gyori | | HADOOP-17796 | Upgrade jetty version to 9.4.43 | Major | build, common | Wei-Chiu Chuang | Renukaprasad C | | YARN-10833 | RM logs endpoint vulnerable to clickjacking | Major | webapp | Benjamin Teke | Benjamin Teke | | HADOOP-17317 | [JDK 11] Upgrade dnsjava to remove illegal access warnings | Major | common | Akira Ajisaka | Akira Ajisaka | | HDFS-12920 | HDFS default value change (with adding time unit) breaks old version MR tarball work with Hadoop 3.x | Critical | configuration, hdfs | Junping Du | Akira Ajisaka | | HADOOP-17807 | Use separate source dir for platform builds | Critical | build | Gautham Banasandra | Gautham Banasandra | | HDFS-16111 | Add a configuration to RoundRobinVolumeChoosingPolicy to avoid failed volumes at datanodes. | Major | datanode | Zhihai Xu | Zhihai Xu | | HDFS-16145 | CopyListing fails with FNF exception with snapshot diff | Major | distcp | Shashikant Banerjee | Shashikant Banerjee | | YARN-10813 | Set default capacity of root for node labels | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HDFS-16144 | Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions) | Major | namenode | Stephen ODonnell | Stephen ODonnell | | YARN-9551 | TestTimelineClientV2Impl.testSyncCall fails intermittently | Minor | ATSv2, test | Prabhu Joseph | Andras Gyori | | HDFS-15175 | Multiple CloseOp shared block instance causes the standby namenode to crash when rolling editlog | Critical | namnode | Yicong Cai | Wan Chang | | YARN-10869 | CS considers only the default maximum-allocation-mb/vcore property as a maximum when it creates dynamic queues | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-10789 | RM HA startup can fail due to race conditions in ZKConfigurationStore | Major | capacity scheduler | Tarun Parimi | Tarun Parimi | | HDFS-14529 | SetTimes to throw FileNotFoundException if inode is not found | Major | hdfs | Harshakiran Reddy | Wei-Chiu Chuang | | HADOOP-17812 | NPE in S3AInputStream read() after failure to reconnect to store | Major | fs/s3 | Bobby Wang | Bobby Wang | | YARN-6221 | Entities missing from ATS when summary log file info got returned to the ATS before the domain log | Critical | yarn | Sushmitha Sreenivasan | Xiaomin Zhang | | MAPREDUCE-7258 | HistoryServerRest.html#TaskCountersAPI, modify the jobTaskCounterss itemName from taskcounterGroup to taskCounterGroup. 
| Minor | documentation | jenny | jenny | | YARN-10878 | TestNMSimulator imports" }, { "data": "| Major | buid | Steve Loughran | Steve Loughran | | HADOOP-17816 | Run optional CI for changes in C | Critical | build | Gautham Banasandra | Gautham Banasandra | | HADOOP-17370 | Upgrade commons-compress to 1.21 | Major | common | Dongjoon Hyun | Akira Ajisaka | | HDFS-16151 | Improve the parameter comments related to ProtobufRpcEngine2#Server() | Minor | documentation | JiangHua Zhu | JiangHua Zhu | | HADOOP-17844 | Upgrade JSON smart to 2.4.7 | Major | build, common | Renukaprasad C | Renukaprasad C | | YARN-10873 | Graceful Decommission ignores launched containers and gets deactivated before timeout | Major | RM | Prabhu Joseph | Srinivas S T | | HDFS-16174 | Refactor TempFile and TempDir in libhdfs++ | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16177 | Bug fix for Util#receiveFile | Minor | hdfs-common | Tao Li | Tao Li | | HADOOP-17836 | Improve logging on ABFS error reporting | Minor | fs/azure | Steve Loughran | Steve Loughran | | HDFS-16178 | Make recursive rmdir in libhdfs++ cross platform | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | YARN-10814 | YARN shouldnt start with empty hadoop.http.authentication.signature.secret.file | Major | security | Benjamin Teke | Tamas Domok | | HADOOP-17858 | Avoid possible class loading deadlock with VerifierNone initialization | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17854 | Run junit in Jenkins only if surefire reports exist | Major | build | Gautham Banasandra | Gautham Banasandra | | HADOOP-17886 | Upgrade ant to 1.10.11 | Major | build, common | Ahmed Hussein | Ahmed Hussein | | HADOOP-17874 | ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-15129 | Datanode caches namenode DNS lookup failure and cannot startup | Minor | ipc | Karthik Palaniappan | Chris Nauroth | | HDFS-16199 | Resolve log placeholders in NamenodeBeanMetrics | Minor | metrics | Viraj Jasani | Viraj Jasani | | HADOOP-17870 | HTTP Filesystem to qualify paths in open()/getFileStatus() | Minor | fs | VinothKumar Raman | VinothKumar Raman | | HADOOP-17899 | Avoid using implicit dependency on junit-jupiter-api | Major | test | Masatake Iwasaki | Masatake Iwasaki | | YARN-10901 | Permission checking error on an existing directory in LogAggregationFileController#verifyAndCreateRemoteLogDir | Major | nodemanager | Tamas Domok | Tamas Domok | | HADOOP-17877 | BuiltInGzipCompressor header and trailer should not be static variables | Critical | compress, io | L. C. Hsieh | L. C. 
Hsieh | | HADOOP-17804 | Prometheus metrics only include the last set of labels | Major | common | Adam Binford | Adam Binford | | HDFS-16207 | Remove NN logs stack trace for non-existent xattr query | Major | namenode | Ahmed Hussein | Ahmed Hussein | | HADOOP-17901 | Performance degradation in" }, { "data": "after HADOOP-16951 | Critical | common | Peter Bacsko | Peter Bacsko | | HADOOP-17904 | Test Result Not Working In Jenkins Result | Major | build, test | Ayush Saxena | Ayush Saxena | | YARN-10903 | Too many Failed to accept allocation proposal because of wrong Headroom check for DRF | Major | capacityscheduler | jackwangcs | jackwangcs | | HDFS-16187 | SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing | Major | snapshots | Srinivasu Majeti | Shashikant Banerjee | | HDFS-16198 | Short circuit read leaks Slot objects when InvalidToken exception is thrown | Major | block placement | Eungsop Yoo | Eungsop Yoo | | YARN-10870 | Missing user filtering check -> yarn.webapp.filter-entity-list-by-user for RM Scheduler page | Major | yarn | Siddharth Ahuja | Gergely Pollk | | HADOOP-17891 | lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries | Major | build | L. C. Hsieh | L. C. Hsieh | | HADOOP-17907 | FileUtil#fullyDelete deletes contents of sym-linked directory when symlink cannot be deleted because of local fs fault | Major | fs | Weihao Zheng | Weihao Zheng | | YARN-10936 | Fix typo in LogAggregationFileController | Trivial | log-aggregation | Tamas Domok | Tibor Kovcs | | HADOOP-17919 | Fix command line example in Hadoop Cluster Setup documentation | Minor | documentation | Rintaro Ikeda | Rintaro Ikeda | | HADOOP-17902 | Fix Hadoop build on Debian 10 | Blocker | build | Gautham Banasandra | Gautham Banasandra | | YARN-10937 | Fix log message arguments in LogAggregationFileController | Trivial | . 
| Tamas Domok | Tibor Kovcs | | HDFS-16230 | Remove irrelevant trim() call in TestStorageRestore | Trivial | test | Thomas Leplus | Thomas Leplus | | HDFS-16129 | HttpFS signature secret file misusage | Major | httpfs | Tamas Domok | Tamas Domok | | HDFS-16205 | Make hdfs_allowSnapshot tool cross platform | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | YARN-9606 | Set sslfactory for AuthenticatedURL() while creating LogsCLI#webServiceClient | Major | webservice | Bilwa S T | Bilwa S T | | HDFS-16233 | Do not use exception handler to implement copy-on-write for EnumCounters | Major | namenode | Wei-Chiu Chuang | Wei-Chiu Chuang | | HDFS-16235 | Deadlock in LeaseRenewer for static remove method | Major | hdfs | angerszhu | angerszhu | | HDFS-16236 | Example command for daemonlog is not correct | Minor | documentation | Renukaprasad C | Renukaprasad C | | HADOOP-17931 | Fix typos in usage message in winutils.exe | Minor | winutils | igo Goiri | Gautham Banasandra | | HDFS-16240 | Replace unshaded guava in HttpFSServerWebServer | Major | httpfs | Masatake Iwasaki | Masatake Iwasaki | | HADOOP-17940 | Upgrade Kafka to 2.8.1 | Major | build, common | Takanobu Asanuma | Takanobu Asanuma | | YARN-10970 | Standby RM should expose prom endpoint | Major | resourcemanager | Max Xie | Max Xie | | YARN-10823 | Expose all node labels for root without explicit configurations | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HDFS-16181 | [SBN Read] Fix metric of RpcRequestCacheMissAmount cant display when tailEditLog form JN | Critical | journal-node, metrics | Zhaohui Wang | Zhaohui Wang | | HDFS-16254 | Cleanup protobuf on exit of hdfs_allowSnapshot | Major | hdfs-client, libhdfs++, tools | Gautham Banasandra | Gautham Banasandra | | HADOOP-17925 | BUILDING.txt should not encourage to activate docs profile on building binary artifacts | Minor | documentation | Rintaro Ikeda | Masatake Iwasaki | | HDFS-16244 | Add the necessary write lock in Checkpointer#doCheckpoint() | Major | namenode | JiangHua Zhu | JiangHua Zhu | | HADOOP-16532 | Fix TestViewFsTrash to use the correct homeDir. 
| Minor | test, viewfs | Steve Loughran | Xing Lin | | HDFS-16268 | Balancer stuck when moving striped blocks due to NPE | Major | balancer & mover, erasure-coding | Leon Gao | Leon Gao | | HDFS-16271 | RBF: NullPointerException when setQuota through routers with quota disabled | Major | rbf | Chengwei Wang | Chengwei Wang | | YARN-10976 | Fix resource leak due to" }, { "data": "| Minor | nodemanager | lujie | lujie | | HADOOP-17932 | Distcp file length comparison have no effect | Major | common, tools, tools/distcp | yinan zhan | yinan zhan | | HDFS-16272 | Int overflow in computing safe length during EC block recovery | Critical | ec, erasure-coding | daimin | daimin | | HADOOP-17908 | Add missing RELEASENOTES and CHANGELOG to upstream | Minor | documentation | Masatake Iwasaki | Masatake Iwasaki | | HADOOP-17971 | Exclude IBM Java security classes from being shaded/relocated | Major | build | Nicholas Marion | Nicholas Marion | | HDFS-7612 | TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir | Major | test | Konstantin Shvachko | Michael Kuchenbecker | | HADOOP-17985 | Disable JIRA plugin for YETUS on Hadoop | Critical | build | Gautham Banasandra | Gautham Banasandra | | HDFS-16269 | [Fix] Improve NNThroughputBenchmark#blockReport operation | Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu | | HDFS-16259 | Catch and re-throw sub-classes of AccessControlException thrown by any permission provider plugins (eg Ranger) | Major | namenode | Stephen ODonnell | Stephen ODonnell | | HDFS-16300 | Use libcrypto in Windows for libhdfspp | Blocker | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16304 | Locate OpenSSL libs for libhdfspp | Major | build, hdfs-client, native | Sangjin Lee | Gautham Banasandra | | HDFS-16311 | Metric metadataOperationRate calculation error in DataNodeVolumeMetrics | Major | datanode, metrics | Tao Li | Tao Li | | YARN-10996 | Fix race condition of User object acquisitions | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HADOOP-18006 | maven-enforcer-plugins execution of banned-illegal-imports gets overridden in child poms | Major | build | Viraj Jasani | Viraj Jasani | | HDFS-16182 | numOfReplicas is given the wrong value in BlockPlacementPolicyDefault$chooseTarget can cause DataStreamer to fail with Heterogeneous Storage | Major | namanode | Max Xie | Max Xie | | HADOOP-17999 | No-op implementation of setWriteChecksum and setVerifyChecksum in ViewFileSystem | Major | viewfs | Abhishek Das | Abhishek Das | | HDFS-16329 | Fix log format for BlockManager | Minor | block placement | Tao Li | Tao Li | | HDFS-16330 | Fix incorrect placeholder for Exception logs in DiskBalancer | Major | datanode | Viraj Jasani | Viraj Jasani | | HDFS-16328 | Correct disk balancer param desc | Minor | documentation, hdfs | guophilipse | guophilipse | | HDFS-16334 | Correct NameNode ACL description | Minor | documentation | guophilipse | guophilipse | | HDFS-16343 | Add some debug logs when the dfsUsed are not used during Datanode startup | Major | datanode | Mukul Kumar Singh | Mukul Kumar Singh | | YARN-10760 | Number of allocated OPPORTUNISTIC containers can dip below 0 | Minor | resourcemanager | Andrew Chung | Andrew Chung | | HADOOP-18016 | Make certain methods LimitedPrivate in S3AUtils.java | Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | YARN-10991 | Fix to ignore the grouping [] for resourcesStr in parseResourcesString method | Minor | distributed-shell | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-17975 | 
Fallback to simple auth does not work for a secondary DistributedFileSystem instance | Major | ipc | Istvn Fajth | Istvn Fajth | | HADOOP-17995 | Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson | Major | common | Haiyang Hu | Haiyang Hu | | HDFS-16350 | Datanode start time should be set after RPC server starts successfully | Minor | datanode | Viraj Jasani | Viraj Jasani | | YARN-11007 | Correct words in YARN documents | Minor | documentation | guophilipse | guophilipse | | YARN-10975" }, { "data": "EntityGroupFSTimelineStore#ActiveLogParser parses already processed files | Major | timelineserver | Prabhu Joseph | Ravuri Sushma sree | | HDFS-16361 | Fix log format for QueryCommand | Minor | command, diskbalancer | Tao Li | Tao Li | | HADOOP-18027 | Include static imports in the maven plugin rules | Major | build | Viraj Jasani | Viraj Jasani | | HDFS-16359 | RBF: RouterRpcServer#invokeAtAvailableNs does not take effect when retrying | Major | rbf | Tao Li | Tao Li | | HDFS-16332 | Expired block token causes slow read due to missing handling in sasl handshake | Major | datanode, dfs, dfsclient | Shinya Yoshida | Shinya Yoshida | | HADOOP-18021 | Provide a public wrapper of Configuration#substituteVars | Major | conf | Andras Gyori | Andras Gyori | | HDFS-16369 | RBF: Fix the retry logic of RouterRpcServer#invokeAtAvailableNs | Major | rbf | Ayush Saxena | Ayush Saxena | | HDFS-16370 | Fix assert message for BlockInfo | Minor | block placement | Tao Li | Tao Li | | HDFS-16293 | Client sleeps and holds dataQueue when DataNodes are congested | Major | hdfs-client | Yuanxin Zhu | Yuanxin Zhu | | YARN-9063 | ATS 1.5 fails to start if RollingLevelDb files are corrupt or missing | Major | timelineserver, timelineservice | Tarun Parimi | Ashutosh Gupta | | YARN-10757 | jsonschema2pojo-maven-plugin version is not defined | Major | build | Akira Ajisaka | Tamas Domok | | YARN-11023 | Extend the root QueueInfo with max-parallel-apps in CapacityScheduler | Major | capacity scheduler | Tamas Domok | Tamas Domok | | MAPREDUCE-7368 | DBOutputFormat.DBRecordWriter#write must throw exception when it fails | Major | mrv2 | Stamatis Zampetakis | Stamatis Zampetakis | | YARN-11016 | Queue weight is incorrectly reset to zero | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HDFS-16324 | Fix error log in BlockManagerSafeMode | Minor | hdfs | guophilipse | guophilipse | | HDFS-15788 | Correct the statement for pmem cache to reflect cache persistence support | Minor | documentation | Feilong He | Feilong He | | YARN-11006 | Allow overriding user limit factor and maxAMResourcePercent with AQCv2 templates | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | HDFS-16333 | fix balancer bug when transfer an EC block | Major | balancer & mover, erasure-coding | qinyuren | qinyuren | | YARN-11020 | [UI2] No container is found for an application attempt with a single AM container | Major | yarn-ui-v2 | Andras Gyori | Andras Gyori | | HADOOP-18043 | Use mina-core 2.0.22 to fix LDAP unit test failures | Major | test | Akira Ajisaka | Akira Ajisaka | | HDFS-16373 | Fix MiniDFSCluster restart in case of multiple namenodes | Major | test | Ayush Saxena | Ayush Saxena | | HDFS-16014 | Fix an issue in checking native pmdk lib by hadoop checknative command | Major | native | Feilong He | Feilong He | | YARN-11045 | ATSv2 storage monitor fails to read from hbase cluster | Major | timelineservice | Viraj Jasani | Viraj Jasani | | YARN-11044 |" }, { "data": "has 
some ineffective asserts | Major | capacity scheduler, test | Benjamin Teke | Benjamin Teke | | HDFS-16377 | Should CheckNotNull before access FsDatasetSpi | Major | datanode | Tao Li | Tao Li | | YARN-10427 | Duplicate Job IDs in SLS output | Major | scheduler-load-simulator | Drew Merrill | Szilard Nemeth | | YARN-6862 | Nodemanager resource usage metrics sometimes are negative | Major | nodemanager | YunFan Zhou | Benjamin Teke | | HADOOP-13500 | Synchronizing iteration of Configuration properties object | Major | conf | Jason Darrell Lowe | Dhananjay Badaya | | YARN-11047 | ResourceManager and NodeManager unable to connect to Hbase when ATSv2 is enabled | Major | timelineservice | Minni Mittal | Viraj Jasani | | HDFS-16385 | Fix Datanode retrieve slownode information bug. | Major | datanode | Jackson Wang | Jackson Wang | | YARN-10178 | Global Scheduler async thread crash caused by Comparison method violates its general contract | Major | capacity scheduler | tuyu | Andras Gyori | | HDFS-16392 | TestWebHdfsFileSystemContract#testResponseCode fails | Major | test | secfree | secfree | | YARN-11053 | AuxService should not use class name as default system classes | Major | auxservices | Cheng Pan | Cheng Pan | | HDFS-16395 | Remove useless NNThroughputBenchmark#dummyActionNoSynch() | Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu | | HADOOP-18057 | Fix typo: validateEncrytionSecrets -> validateEncryptionSecrets | Major | fs/s3 | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-18045 | Disable TestDynamometerInfra | Major | test | Akira Ajisaka | Akira Ajisaka | | HDFS-14099 | Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor | Major | compress, io | ZanderXu | ZanderXu | | HADOOP-18063 | Remove unused import AbstractJavaKeyStoreProvider in Shell class | Minor | command | JiangHua Zhu | JiangHua Zhu | | HDFS-16393 | RBF: Fix TestRouterRPCMultipleDestinationMountTableResolver | Major | rbf | Ayush Saxena | Ayush Saxena | | HDFS-16409 | Fix typo: testHasExeceptionsReturnsCorrectValue -> testHasExceptionsReturnsCorrectValue | Trivial | test | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16408 | Ensure LeaseRecheckIntervalMs is greater than zero | Major | namenode | ECFuzz | ECFuzz | | HDFS-16410 | Insecure Xml parsing in OfflineEditsXmlLoader | Minor | tools | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16417 | RBF: StaticRouterRpcFairnessPolicyController init fails with division by 0 if concurrent ns handler count is configured | Minor | rbf | Felix N | Felix N | | HADOOP-18077 | ProfileOutputServlet unable to proceed due to NPE | Major | common | Viraj Jasani | Viraj Jasani | | HDFS-16420 | Avoid deleting unique data blocks when deleting redundancy striped blocks | Critical | ec, erasure-coding | qinyuren | Jackson Wang | | YARN-11055 | In cgroups-operations.c some fprintf format strings dont end with \\n | Minor | nodemanager | Gera Shegalov | Gera Shegalov | | YARN-11065 | Bump follow-redirects from 1.13.3 to 1.14.7 in hadoop-yarn-ui | Major | yarn-ui-v2 | Akira Ajisaka | Akira Ajisaka | | HDFS-16402 | Improve HeartbeatManager logic to avoid incorrect stats | Major | datanode | Tao Li | Tao Li | | HADOOP-17593 | hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency | Major | build | Steve Loughran | lixianwei | | YARN-10561 | Upgrade node.js to 12.22.1 and yarn to 1.22.5 in YARN application catalog webapp | Critical | webapp | Akira Ajisaka | Akira Ajisaka | | HDFS-16303 | Losing over 100 datanodes in state decommissioning 
results in full blockage of all datanode decommissioning | Major | block placement, datanode | Kevin Wikant | Kevin Wikant | | HDFS-16443 | Fix edge case where DatanodeAdminDefaultMonitor doubly enqueues a DatanodeDescriptor on exception | Major | hdfs | Kevin Wikant | Kevin Wikant | | YARN-10822 | Containers going from New to Scheduled transition for killed container on recovery | Major | container, nodemanager | Minni Mittal | Minni Mittal | | HADOOP-18101 | Bump aliyun-sdk-oss to 3.13.2 and jdom2 to" }, { "data": "| Major | build, common | Aswin Shakil | Aswin Shakil | | HDFS-16411 | RBF: RouterId is NULL when set dfs.federation.router.rpc.enable=false | Major | rbf | YulongZ | YulongZ | | HDFS-16406 | DataNode metric ReadsFromLocalClient does not count short-circuit reads | Minor | datanode, metrics | secfree | secfree | | HADOOP-18096 | Distcp: Sync moves filtered file to home directory rather than deleting | Critical | tools/distcp | Ayush Saxena | Ayush Saxena | | HDFS-16449 | Fix hadoop web site release notes and changelog not available | Minor | documentation | guophilipse | guophilipse | | YARN-10788 | TestCsiClient fails | Major | test | Akira Ajisaka | Akira Ajisaka | | HADOOP-18126 | Update junit 5 version due to build issues | Major | build | PJ Fanning | PJ Fanning | | YARN-11068 | Update transitive log4j2 dependency to 2.17.1 | Major | buid | Wei-Chiu Chuang | Wei-Chiu Chuang | | HDFS-16316 | Improve DirectoryScanner: add regular file check related block | Major | datanode | JiangHua Zhu | JiangHua Zhu | | YARN-11071 | AutoCreatedQueueTemplate incorrect wildcard level | Major | capacity scheduler | Tamas Domok | Tamas Domok | | YARN-11070 | Minimum resource ratio is overridden by subsequent labels | Major | yarn | Andras Gyori | Andras Gyori | | YARN-11033 | isAbsoluteResource is not correct for dynamically created queues | Minor | yarn | Tamas Domok | Tamas Domok | | YARN-10894 | Follow up YARN-10237: fix the new test case in TestRMWebServicesCapacitySched | Major | webapp | Tamas Domok | Tamas Domok | | YARN-11022 | Fix the documentation for max-parallel-apps in CS | Major | capacity scheduler | Tamas Domok | Tamas Domok | | YARN-11042 | Fix testQueueSubmitWithACLsEnabledWithQueueMapping in TestAppManager | Major | yarn | Tamas Domok | Tamas Domok | | HADOOP-18151 | Switch the baseurl for Centos 8 | Blocker | build | Gautham Banasandra | Gautham Banasandra | | HDFS-16496 | Snapshot diff on snapshotable directory fails with not snapshottable error | Major | namanode | Stephen ODonnell | Stephen ODonnell | | YARN-11067 | Resource overcommitment due to incorrect resource normalisation logical order | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HADOOP-18129 | Change URI[] in INodeLink to String[] to reduce memory footprint of ViewFileSystem | Major | viewfs | Abhishek Das | Abhishek Das | | HDFS-16503 | Should verify whether the path name is valid in the WebHDFS | Major | webhdfs | Tao Li | Tao Li | | YARN-11089 | Fix typo in RM audit log | Major | resourcemanager | Junfan Zhang | Junfan Zhang | | YARN-11087 | Introduce the config to control the refresh interval in RMDelegatedNodeLabelsUpdater | Major | nodelabel | Junfan Zhang | Junfan Zhang | | YARN-11100 | Fix StackOverflowError in SLS scheduler event handling | Major | scheduler-load-simulator | Szilard Nemeth | Szilard Nemeth | | HDFS-16523 | Fix dependency error in hadoop-hdfs on M1 Mac | Major | build | Akira Ajisaka | Akira Ajisaka | | HDFS-16498 | Fix NPE for checkBlockReportLease | Major | datanode, 
namanode | Tao Li | Tao Li | | YARN-11106 | Fix the test failure due to missing conf of" }, { "data": "| Major | test | Junfan Zhang | Junfan Zhang | | HDFS-16518 | KeyProviderCache close cached KeyProvider with Hadoop ShutdownHookManager | Major | hdfs | Lei Yang | Lei Yang | | HADOOP-18169 | getDelegationTokens in ViewFs should also fetch the token from the fallback FS | Major | fs | Xing Lin | Xing Lin | | YARN-11102 | Fix spotbugs error in hadoop-sls module | Major | scheduler-load-simulator | Akira Ajisaka | Szilard Nemeth | | HDFS-16479 | EC: NameNode should not send a reconstruction work when the source datanodes are insufficient | Critical | ec, erasure-coding | Yuanbo Liu | Takanobu Asanuma | | HDFS-16509 | Fix decommission UnsupportedOperationException: Remove unsupported | Major | namenode | daimin | daimin | | YARN-11107 | When NodeLabel is enabled for a YARN cluster, AM blacklist program does not work properly | Major | resourcemanager | Xiping Zhang | Xiping Zhang | | HDFS-16456 | EC: Decommission a rack with only on dn will fail when the rack number is equal with replication | Critical | ec, namenode | caozhiqiang | caozhiqiang | | HDFS-16535 | SlotReleaser should reuse the domain socket based on socket paths | Major | hdfs-client | Quanlong Huang | Quanlong Huang | | HADOOP-18109 | Ensure that default permissions of directories under internal ViewFS directories are the same as directories on target filesystems | Major | viewfs | Chentao Yu | Chentao Yu | | HDFS-16422 | Fix thread safety of EC decoding during concurrent preads | Critical | dfsclient, ec, erasure-coding | daimin | daimin | | HDFS-16437 | ReverseXML processor doesnt accept XML files without the SnapshotDiffSection. | Critical | hdfs | yanbin.zhang | yanbin.zhang | | HDFS-16507 | [SBN read] Avoid purging edit log which is in progress | Critical | namanode | Tao Li | Tao Li | | YARN-10720 | YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging | Critical | webproxy | Qi Zhu | Qi Zhu | | HDFS-16428 | Source path with storagePolicy cause wrong typeConsumed while rename | Major | hdfs, namenode | lei w | lei w | | YARN-11014 | YARN incorrectly validates maximum capacity resources on the validation API | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-11075 | Explicitly declare serialVersionUID in LogMutation class | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | HDFS-11041 | Unable to unregister FsDatasetState MBean if DataNode is shutdown twice | Trivial | datanode | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-18160 | `org.wildfly.openssl` should not be shaded by Hadoop build | Major | build | Andr F. | Andr F. | | HADOOP-18202 | create-release fails fatal: unsafe repository (/build/source is owned by someone else) | Major | build | Steve Loughran | Steve Loughran | | HDFS-16538 | EC decoding failed due to not enough valid inputs | Major | erasure-coding | qinyuren | qinyuren | | HDFS-16544 | EC decoding failed due to invalid buffer | Major | erasure-coding | qinyuren | qinyuren | | YARN-11111 | Recovery failure when node-label configure-type transit from delegated-centralized to centralized | Major | yarn | Junfan Zhang | Junfan Zhang | | HDFS-16552 | Fix NPE for TestBlockManager | Major | test | Tao Li | Tao Li | | MAPREDUCE-7246 | In MapredAppMasterRest#MapreduceApplicationMasterInfoAPI, the datatype of appId should be string. 
| Major | documentation | jenny | Ashutosh Gupta |
| HADOOP-18216 | Document io.file.buffer.size must be greater than zero | Minor | io | ECFuzz | ECFuzz |
| YARN-10187 | Removing hadoop-yarn-project/hadoop-yarn/README as it is no longer maintained. | Minor | documentation | N Sanketh Reddy | Ashutosh Gupta |
| HDFS-16564 | Use uint32_t for hdfs_find | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HADOOP-16515 | Update the link to compatibility guide | Minor | documentation | Akira Ajisaka | Ashutosh Gupta |
| HDFS-16185 | Fix comment in | Minor | documentation | Akira Ajisaka | Ashutosh Gupta |
| HADOOP-17479 | Fix the examples of hadoop config prefix | Minor | documentation | Akira Ajisaka | Ashutosh Gupta |
| MAPREDUCE-7376 | AggregateWordCount fetches wrong results | Major | aggregate | Ayush Saxena | Ayush Saxena |
| HDFS-16572 | Fix typo in readme of hadoop-project-dist | Trivial | documentation | Gautham Banasandra | Gautham Banasandra |
| HADOOP-18222 | Prevent DelegationTokenSecretManagerMetrics from registering multiple times | Major | common | Hector Sandoval Chaverri | Hector Sandoval Chaverri |
| HDFS-16525 | System.err should be used when error occurs in multiple methods in DFSAdmin class | Major | dfsadmin | yanbin.zhang | yanbin.zhang |
| YARN-11123 | ResourceManager webapps test failures due to org.apache.hadoop.metrics2.MetricsException and subsequent java.net.BindException: Address already in use | Major | resourcemanager | Szilard Nemeth | Szilard Nemeth |
| YARN-11073 | Avoid unnecessary preemption for tiny queues under certain corner cases | Major | capacity scheduler, scheduler preemption | Jian Chen | Jian Chen |
| MAPREDUCE-7377 | Remove unused imports in MapReduce project | Minor | build | Ashutosh Gupta | Ashutosh Gupta |
| HDFS-16540 | Data locality is lost when DataNode pod restarts in kubernetes | Major | namenode | Huaxiang Sun | Huaxiang Sun |
| YARN-11092 | Upgrade jquery ui to 1.13.1 | Major | buid | D M Murali Krishna Reddy | Ashutosh Gupta |
| YARN-11133 | YarnClient gets the wrong EffectiveMinCapacity value | Major | api | Zilong Zhu | Zilong Zhu |
| YARN-10850 | TimelineService v2 lists containers for all attempts when filtering for one | Major | timelinereader | Benjamin Teke | Benjamin Teke |
| YARN-11126 | ZKConfigurationStore Java deserialisation vulnerability | Major | yarn | Tamas Domok | Tamas Domok |
| YARN-11141 | Capacity Scheduler does not support ambiguous queue names when moving application across queues | Major | capacity scheduler | Andras Gyori | Andras Gyori |
| YARN-11147 | ResourceUsage and QueueCapacities classes provide node label iterators that are not thread safe | Major | capacity scheduler | Andras Gyori | Andras Gyori |
| YARN-11152 | QueueMetrics is leaking memory when creating a new queue during reinitialisation | Major | capacity scheduler | Andras Gyori | Andras Gyori |
| HADOOP-18245 | Extend KMS related exceptions that get mapped to ConnectException | Major | kms | Ritesh Shukla | Ritesh Shukla |
| HADOOP-18120 | Hadoop auth does not handle HTTP Headers in a case-insensitive way | Critical | auth | Daniel Fritsi | János Makai |
| HDFS-16453 | Upgrade okhttp from 2.7.5 to 4.9.3 | Major | hdfs-client | Ivan Viaznikov | Ashutosh Gupta |
| YARN-11162 | Set the zk acl for nodes created by ZKConfigurationStore. | Major | resourcemanager | Owen O'Malley | Owen O'Malley |
| HDFS-16586 | Purge FsDatasetAsyncDiskService threadgroup; it causes BPServiceActor$CommandProcessingThread IllegalThreadStateException fatal exception and exit | Major | datanode | Michael Stack | Michael Stack |
| HADOOP-18251 | Fix failure of extracting JIRA id from commit message in | Minor | build | Masatake Iwasaki | Masatake Iwasaki |
| HDFS-16561 | Handle error returned by strtol | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| YARN-11128 | Fix comments in TestProportionalCapacityPreemptionPolicy* | Minor | capacityscheduler, documentation | Ashutosh Gupta | Ashutosh Gupta |
| HDFS-15225 | RBF: Add snapshot counts to content summary in router | Major | rbf | Quan Li | Ayush Saxena |
| HDFS-16583 | DatanodeAdminDefaultMonitor can get stuck in an infinite loop | Major | datanode | Stephen O'Donnell | Stephen O'Donnell |
| HDFS-16604 | Install gtest via FetchContent_Declare in CMake | Blocker | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HADOOP-18268 | Install Maven from Apache archives | Blocker | build | Gautham Banasandra | Gautham Banasandra |
| HADOOP-18274 | Use CMake 3.19.0 in Debian 10 | Blocker | build | Gautham Banasandra | Gautham Banasandra |
| HDFS-16602 | Use defined directive along with #if | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra |
| HDFS-16608 | Fix the link in TestClientProtocolForPipelineRecovery | Minor | documentation | Samrat Deb | Samrat Deb |
| HDFS-16563 | Namenode WebUI prints sensitive information on Token Expiry | Major | namanode, security, webhdfs | Renukaprasad C | Renukaprasad C |
| HDFS-16623 | IllegalArgumentException in LifelineSender | Major | datanode | ZanderXu | ZanderXu |
| HDFS-16628 | RBF: Correct target directory when move to trash for kerberos login user. | Major | rbf | Xiping Zhang | Xiping Zhang |
| HDFS-16064 | Determine when to invalidate corrupt replicas based on number of usable replicas | Major | datanode, namenode | Kevin Wikant | Kevin Wikant |
| YARN-9827 | Fix Http Response code in GenericExceptionHandler. | Major | webapp | Abhishek Modi | Ashutosh Gupta |
| HDFS-16635 | Fix javadoc error in Java 11 | Major | build, documentation | Akira Ajisaka | Ashutosh Gupta |
| MAPREDUCE-7387 | Fix TestJHSSecurity#testDelegationToken AssertionError due to HDFS-16563 | Major | test | Shilun Fan | Shilun Fan |
| MAPREDUCE-7369 | MapReduce tasks timing out when spends more time on MultipleOutputs#close | Major | mrv1, mrv2 | Prabhu Joseph | Ashutosh Gupta |
| YARN-11185 | Pending app metrics are increased doubly when a queue reaches its max-parallel-apps limit | Major | capacity scheduler | Andras Gyori | Andras Gyori |
| MAPREDUCE-7389 | Typo in description of mapreduce.application.classpath in mapred-default.xml | Trivial | mrv2 | Christian Bartolomäus | Christian Bartolomäus |
| HADOOP-18159 | Certificate doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com] | Major | fs/s3 | André F. | André F.
| | YARN-9971 | YARN Native Service HttpProbe logs THIS_HOST in error messages | Minor | yarn-native-services | Prabhu Joseph | Ashutosh Gupta | | MAPREDUCE-7391 | TestLocalDistributedCacheManager failing after HADOOP-16202 | Major | test | Steve Loughran | Steve Loughran | | YARN-11188 | Only files belong to the first file controller are removed even if multiple log aggregation file controllers are configured | Major | log-aggregation | Szilard Nemeth | Szilard Nemeth | | YARN-10974 | CS UI: queue filter and openQueues param do not work as expected | Major | capacity scheduler | Chengbing Liu | Chengbing Liu | | HADOOP-18237 | Upgrade Apache Xerces Java to 2.12.2 | Major | build | Ashutosh Gupta | Ashutosh Gupta | | YARN-10320 | Replace FSDataInputStream#read with readFully in Log Aggregation | Major | log-aggregation | Prabhu Joseph | Ashutosh Gupta | | YARN-10303 | One yarn rest api example of yarn document is error | Minor | documentation | bright.zhou | Ashutosh Gupta | | HDFS-16633 | Reserved Space For Replicas is not released on some cases | Major | hdfs | Prabhu Joseph | Ashutosh Gupta | | HDFS-16591 | StateStoreZooKeeper fails to initialize | Major | rbf | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | YARN-11204 | Various MapReduce tests fail with NPE in" }, { "data": "| Major | log-aggregation | Szilard Nemeth | Szilard Nemeth | | HADOOP-18321 | Fix when to read an additional record from a BZip2 text file split | Critical | io | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-15789 | DistCp does not clean staging folder if class extends DistCp | Minor | tools/distcp | Lawrence Andrews | Lawrence Andrews | | HADOOP-18100 | Change scope of inner classes in InodeTree to make them accessible outside package | Major | viewfs | Abhishek Das | Abhishek Das | | HADOOP-18217 | shutdownhookmanager should not be multithreaded (deadlock possible) | Minor | util | Catherinot Remi | Catherinot Remi | | YARN-11198 | Deletion of assigned resources (e.g. 
GPUs, NUMA, FPGAs) from State Store | Major | nodemanager | Prabhu Joseph | Samrat Deb | | HADOOP-18074 | Partial/Incomplete groups list can be returned in LDAP groups lookup | Major | security | Philippe Lanoe | Larry McCay | | HDFS-16566 | Erasure Coding: Recovery may cause excess replicas when busy DN exsits | Major | ec, erasure-coding | Ruinan Gu | Ruinan Gu | | HDFS-16654 | Link OpenSSL lib for CMake deps check | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | YARN-11192 | TestRouterWebServicesREST failing after YARN-9827 | Major | federation | Shilun Fan | Shilun Fan | | HDFS-16665 | Fix duplicate sources for hdfspptestshim_static | Critical | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16667 | Use malloc for buffer allocation in uriparser2 | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | YARN-11211 | QueueMetrics leaks Configuration objects when validation API is called multiple times | Major | capacity scheduler | Andrs Gyri | Andrs Gyri | | HDFS-16680 | Skip libhdfspp Valgrind tests on Windows | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | HDFS-16681 | Do not pass GCC flags for MSVC in libhdfspp | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | MAPREDUCE-7372 | MapReduce set permission too late in copyJar method | Major | mrv2 | Zhang Dongsheng | Zhang Dongsheng | | HDFS-16533 | COMPOSITE_CRC failed between replicated file and striped file due to invalid requested length | Major | hdfs, hdfs-client | ZanderXu | ZanderXu | | YARN-11210 | Fix YARN RMAdminCLI retry logic for non-retryable kerberos configuration exception | Major | client | Kevin Wikant | Kevin Wikant | | HADOOP-18079 | Upgrade Netty to 4.1.77.Final | Major | build | Renukaprasad C | Wei-Chiu Chuang | | HADOOP-18364 | All method metrics related to the RPC protocol should be initialized | Major | metrics | Shuyan Zhang | Shuyan Zhang | | HADOOP-18363 | Fix bug preventing hadoop-metrics2 from emitting metrics to > 1 Ganglia servers. | Major | metrics | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-18390 | Fix out of sync import for HADOOP-18321 | Minor | common | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-18387 | Fix incorrect placeholder in hadoop-common | Minor | common | Shilun Fan | Shilun Fan | | YARN-11237 | Bug while disabling proxy failover with Federation | Major | federation | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-18383 | Codecs with @DoNotPool annotation are not closed causing memory leak | Major | common | Kevin Sewell | Kevin Sewell | | HADOOP-18404 | Fix broken link to wiki help page in" }, { "data": "| Major | documentation | Paul King | Paul King | | HDFS-16676 | DatanodeAdminManager$Monitor reports a node as invalid continuously | Major | namenode | Prabhu Joseph | Ashutosh Gupta | | YARN-11254 | hadoop-minikdc dependency duplicated in hadoop-yarn-server-nodemanager | Minor | nodemanager | Clara Fang | Clara Fang | | HDFS-16729 | RBF: fix some unreasonably annotated docs | Major | documentation, rbf | JiangHua Zhu | JiangHua Zhu | | YARN-9425 | Make initialDelay configurable for FederationStateStoreService#scheduledExecutorService | Major | federation | Shen Yinjie | Ashutosh Gupta | | HDFS-4043 | Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name. 
| Major | security | Ahad Rana | Steve Vaughan | | HDFS-16724 | RBF should support get the information about ancestor mount points | Major | rbf | ZanderXu | ZanderXu | | MAPREDUCE-7403 | Support spark dynamic partitioning in the Manifest Committer | Major | mrv2 | Steve Loughran | Steve Loughran | | HDFS-16728 | RBF throw IndexOutOfBoundsException with disableNameServices | Major | rbf | ZanderXu | ZanderXu | | HDFS-16738 | Invalid CallerContext caused NullPointerException | Critical | namanode | ZanderXu | ZanderXu | | HDFS-16732 | [SBN READ] Avoid get location from observer when the block report is delayed. | Critical | hdfs | Chenyu Zheng | Chenyu Zheng | | HDFS-16734 | RBF: fix some bugs when handling getContentSummary RPC | Major | rbf | ZanderXu | ZanderXu | | HADOOP-18375 | Fix failure of shelltest for hadoopaddldlibpath | Minor | test | Masatake Iwasaki | Masatake Iwasaki | | YARN-11287 | Fix NoClassDefFoundError: org/junit/platform/launcher/core/LauncherFactory after YARN-10793 | Major | build, test | Shilun Fan | Shilun Fan | | HADOOP-18428 | Parameterize platform toolset version | Major | common | Gautham Banasandra | Gautham Banasandra | | HDFS-16755 | TestQJMWithFaults.testUnresolvableHostName() can fail due to unexpected host resolution | Minor | test | Steve Vaughan | Steve Vaughan | | HDFS-16750 | NameNode should use NameNode.getRemoteUser() to log audit event to avoid possible NPE | Major | namanode | ZanderXu | ZanderXu | | HDFS-16593 | Correct inaccurate BlocksRemoved metric on DataNode side | Minor | datanode, metrics | ZanderXu | ZanderXu | | HDFS-16748 | RBF: DFSClient should uniquely identify writing files by namespace id and iNodeId | Critical | rbf | ZanderXu | ZanderXu | | HDFS-16659 | JournalNode should throw NewerTxnIdException if SinceTxId is bigger than HighestWrittenTxId | Critical | journal-node | ZanderXu | ZanderXu | | HADOOP-18426 | Improve the accuracy of MutableStat mean | Major | common | Shuyan Zhang | Shuyan Zhang | | HDFS-16756 | RBF proxies the clients user by the login user to enable CacheEntry | Major | rbf | ZanderXu | ZanderXu | | YARN-11301 | Fix NoClassDefFoundError: org/junit/platform/launcher/core/LauncherFactory after YARN-11269 | Major | test, timelineserver | Shilun Fan | Shilun Fan | | HADOOP-18452 | Fix TestKMS#testKMSHAZooKeeperDelegationToken Failed By Hadoop-18427 | Major | common, test | Shilun Fan | Shilun Fan | | HADOOP-18400 | Fix file split duplicating records from a succeeding split when reading BZip2 text files | Critical | common | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16772 | refreshHostsReader should use the new configuration | Major | datanode, namanode | ZanderXu | ZanderXu | | YARN-11305 | Fix TestLogAggregationService#testLocalFileDeletionAfterUpload Failed After YARN-11241(#4703) | Major | log-aggregation | Shilun Fan | Shilun Fan | | HADOOP-16674 | TestDNS.testRDNS can fail with ServiceUnavailableException | Minor | common, net | Steve Loughran | Ashutosh Gupta | | HDFS-16706 | ViewFS doc points to wrong mount table name | Minor | documentation | Prabhu Joseph | Samrat Deb | | YARN-11296 |" }, { "data": "SQLFederationStateStore#Sql script bug | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18456 | NullPointerException in ObjectListingIterators constructor | Blocker | fs/s3 | Quanlong Huang | Steve Loughran | | HADOOP-18444 | Add Support for localized trash for ViewFileSystem in Trash.moveToAppropriateTrash | Major | fs | Xing Lin | Xing Lin | | HADOOP-18443 | Upgrade snakeyaml to 1.32 | Major | security 
| Ashutosh Gupta | Ashutosh Gupta |
| HDFS-16766 | hdfs ec command loads (administrator provided) erasure code policy files without disabling xml entity expansion | Major | erasure-coding, security | Jing | Ashutosh Gupta |
| HDFS-16798 | SerialNumberMap should decrease current counter if the item exist | Major | namanode | ZanderXu | ZanderXu |
| HDFS-16777 | datatables@1.10.17 sonatype-2020-0988 vulnerability | Major | ui | Eugene Shinn (Truveta) | Ashutosh Gupta |
| YARN-10680 | Revisit try blocks without catch blocks but having finally blocks | Minor | scheduler-load-simulator | Szilard Nemeth | Susheel Gupta |
| YARN-11039 | LogAggregationFileControllerFactory::getFileControllerForRead can leak threads | Blocker | log-aggregation | Rajesh Balamohan | Steve Loughran |
| HADOOP-18471 | An unhandled ArrayIndexOutOfBoundsException in DefaultStringifier.storeArray() if provided with an empty input | Minor | common, io | ConfX | ConfX |
| HADOOP-9946 | NumAllSinks metrics shows lower value than NumActiveSinks | Major | metrics | Akira Ajisaka | Ashutosh Gupta |
| YARN-11357 | Fix FederationClientInterceptor#submitApplication Can't Update SubClusterId | Major | federation | Shilun Fan | Shilun Fan |
| HADOOP-18499 | S3A to support HTTPS web proxies | Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
| MAPREDUCE-7426 | Fix typo in class StartEndTImesBase | Trivial | mrv2 | Samrat Deb | Samrat Deb |
| YARN-11365 | Fix NM class not found on Windows | Blocker | yarn | Gautham Banasandra | Gautham Banasandra |
| HADOOP-18233 | Initialization race condition with TemporaryAWSCredentialsProvider | Major | auth, fs/s3 | Jason Sleight | Jimmy Wong |
| MAPREDUCE-7425 | Document Fix for yarn.app.mapreduce.client-am.ipc.max-retries | Major | yarn | teng wang | teng wang |
| MAPREDUCE-7386 | Maven parallel builds (skipping tests) fail | Critical | build | Steve Vaughan | Steve Vaughan |
| YARN-11367 | [Federation] Fix DefaultRequestInterceptorREST Client NPE | Major | federation, router | Shilun Fan | Shilun Fan |
| HADOOP-18504 | An unhandled NullPointerException in class KeyProvider | Major | common | ConfX | ConfX |
| HDFS-16834 | PoolAlignmentContext should not max poolLocalStateId with sharedGlobalStateId when sending requests to the namenode. | Major | namnode | Simbarashe Dzinamarira | Simbarashe Dzinamarira |
| HADOOP-18528 | Disable abfs prefetching by default | Major | fs/azure | Mehakmeet Singh | Mehakmeet Singh |
| HDFS-16836 | StandbyCheckpointer can still trigger rollback fs image after RU is finalized | Major | hdfs | Lei Yang | Lei Yang |
| HADOOP-18429 | MutableGaugeFloat#incr(float) get stuck in an infinite loop | Major | metrics | asdfgh19 | Ashutosh Gupta |
| HADOOP-18324 | Interrupting RPC Client calls can lead to thread exhaustion | Critical | ipc | Owen O'Malley | Owen O'Malley |
| HADOOP-8728 | Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.
| Minor | fs | Harsh J | Ashutosh Gupta | | HADOOP-18532 | Update command usage in" }, { "data": "| Trivial | documentation | guophilipse | guophilipse | | HDFS-16832 | [SBN READ] Fix NPE when check the block location of empty directory | Major | namanode | Chenyu Zheng | Chenyu Zheng | | HDFS-16547 | [SBN read] Namenode in safe mode should not be transfered to observer state | Major | namanode | Tao Li | Tao Li | | YARN-8262 | get_executable in container-executor should provide meaningful error codes | Minor | container-executor | Miklos Szegedi | Susheel Gupta | | HDFS-16838 | Fix NPE in testAddRplicaProcessorForAddingReplicaInMap | Major | test | ZanderXu | ZanderXu | | HDFS-16826 | [RBF SBN] ConnectionManager should advance the client stateId for every request | Major | namanode, rbf | ZanderXu | ZanderXu | | HADOOP-18498 | [ABFS]: Error introduced when SAS Token containing ? prefix is passed | Minor | fs/azure | Sree Bhattacharyya | Sree Bhattacharyya | | HDFS-16845 | Add configuration flag to enable observer reads on routers without using ObserverReadProxyProvider | Critical | configuration | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HDFS-16847 | RBF: StateStore writer should not commit tmp fail if there was an error in writing the file. | Critical | hdfs, rbf | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HADOOP-18408 | [ABFS]: ITestAbfsManifestCommitProtocol fails on nonHNS configuration | Minor | fs/azure, test | Pranav Saxena | Sree Bhattacharyya | | HDFS-16550 | [SBN read] Improper cache-size for journal node may cause cluster crash | Major | journal-node | Tao Li | Tao Li | | HDFS-16809 | EC striped block is not sufficient when doing in maintenance | Major | ec, erasure-coding | dingshun | dingshun | | HDFS-16837 | [RBF SBN] ClientGSIContext should merge RouterFederatedStates to get the max state id for each namespace | Major | rbf | ZanderXu | ZanderXu | | HADOOP-18402 | S3A committer NPE in spark job abort | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | YARN-10978 | YARN-10978. Fix ApplicationClassLoader to Correctly Expand Glob for Windows Path | Major | utils | Akshat Bordia | Akshat Bordia | | YARN-11386 | Fix issue with classpath resolution | Critical | nodemanager | Gautham Banasandra | Gautham Banasandra | | YARN-11390 | TestResourceTrackerService.testNodeRemovalNormally: Shutdown nodes should be 0 now expected: <1> but was: <0> | Major | yarn | Bence Kosztolnik | Bence Kosztolnik | | HDFS-16868 | Fix audit log duplicate issue when an ACE occurs in FSNamesystem. 
| Major | fs | Beibei Zhao | Beibei Zhao | | HADOOP-18569 | NFS Gateway may release buffer too early | Blocker | nfs | Attila Doroszlai | Attila Doroszlai | | HADOOP-18574 | Changing log level of IOStatistics increment to make the DEBUG logs less noisy | Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | HDFS-16852 | Register the shutdown hook only when not in shutdown for KeyProviderCache constructor | Minor | hdfs | Xing Lin | Xing Lin | | HADOOP-18567 | LogThrottlingHelper: the dependent recorder is not triggered correctly | Major | common | Chengbing Liu | Chengbing Liu | | HDFS-16871 | DiskBalancer process may throws IllegalArgumentException when the target DataNode has capital letter in hostname | Major | datanode | Daniel Ma | Daniel Ma | | HDFS-16689 | Standby NameNode crashes when transitioning to Active with in-progress tailer | Critical | namanode | ZanderXu | ZanderXu | | HDFS-16831 | [RBF SBN] GetNamenodesForNameserviceId should shuffle Observer NameNodes every time | Major | namanode, rbf | ZanderXu | ZanderXu | | YARN-11395 | Resource Manager UI, cluster/appattempt/*, can not present FINAL_SAVING state | Critical | yarn | Bence Kosztolnik | Bence Kosztolnik | | YARN-10879 | Incorrect WARN text in ACL check for application tag based placement | Minor | resourcemanager | Brian Goerlitz | Susheel Gupta | | HDFS-16861 | RBF. Truncate API always fails when dirs use AllResolver oder on Router | Major | rbf | Max Xie | Max Xie | | YARN-11392 | ClientRMService implemented getCallerUgi and verifyUserAccessForRMApp methods but forget to use sometimes, caused audit log" }, { "data": "| Major | yarn | Beibei Zhao | Beibei Zhao | | HDFS-16881 | Warn if AccessControlEnforcer runs for a long time to check permission | Major | namanode | Tsz-wo Sze | Tsz-wo Sze | | HDFS-16877 | Namenode doesnt use alignment context in TestObserverWithRouter | Major | hdfs, rbf | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HADOOP-18581 | Handle Server KDC re-login when Server and Client run in same JVM. | Major | common | Surendra Singh Lilhore | Surendra Singh Lilhore | | HDFS-16885 | Fix TestHdfsConfigFields#testCompareConfigurationClassAgainstXml failed | Major | configuration, namanode | Haiyang Hu | Haiyang Hu | | HDFS-16872 | Fix log throttling by declaring LogThrottlingHelper as static members | Major | namanode | Chengbing Liu | Chengbing Liu | | HDFS-16884 | Fix TestFsDatasetImpl#testConcurrentWriteAndDeleteBlock failed | Major | test | Haiyang Hu | Haiyang Hu | | YARN-11190 | CS Mapping rule bug: User matcher does not work correctly for usernames with dot | Major | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | YARN-11413 | Fix Junit Test ERROR Introduced By YARN-6412 | Major | api | Shilun Fan | Shilun Fan | | HADOOP-18591 | Fix a typo in Trash | Minor | documentation | xiaoping.huang | xiaoping.huang | | MAPREDUCE-7375 | JobSubmissionFiles dont set right permission after mkdirs | Major | mrv2 | Zhang Dongsheng | Zhang Dongsheng | | HDFS-16764 | ObserverNamenode handles addBlock rpc and throws a FileNotFoundException | Critical | namanode | ZanderXu | ZanderXu | | HDFS-16876 | Garbage collect map entries in shared RouterStateIdContext using information from namenodeResolver instead of the map of active connectionPools. 
| Critical | rbf | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HADOOP-17717 | Update wildfly openssl to 1.1.3.Final | Major | build, common | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-18601 | Fix build failure with docs profile | Major | build | Masatake Iwasaki | Masatake Iwasaki | | HADOOP-18598 | maven site generation doesnt include javadocs | Blocker | site | Steve Loughran | Steve Loughran | | HDFS-16821 | Fix regression in HDFS-13522 that enables observer reads by" }, { "data": "| Major | hdfs | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HADOOP-18584 | [NFS GW] Fix regression after netty4 migration | Major | common | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-18279 | Cancel fileMonitoringTimer even if trustManager isnt defined | Major | common, test | Steve Vaughan | Steve Vaughan | | HADOOP-18576 | Java 11 JavaDoc fails due to missing package comments | Major | build, common | Steve Loughran | Steve Vaughan | | HADOOP-18612 | Avoid mixing canonical and non-canonical when performing comparisons | Minor | common, test | Steve Vaughan | Steve Vaughan | | HDFS-16895 | NamenodeHeartbeatService should use credentials of logged in user | Major | rbf | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | HDFS-16910 | Fix incorrectly initializing RandomAccessFile caused flush performance decreased for JN | Major | namanode | Haiyang Hu | Haiyang Hu | | HDFS-16761 | Namenode UI for Datanodes page not loading if any data node is down | Major | namenode, ui | Krishna Reddy | Zita Dombi | | HDFS-16925 | Namenode audit log to only include IP address of client | Major | namanode | Viraj Jasani | Viraj Jasani | | YARN-11408 | Add a check of autoQueueCreation is disabled for emitDefaultUserLimitFactor method | Major | yarn | Susheel Gupta | Susheel Gupta | | HADOOP-18582 | No need to clean tmp files in distcp direct mode | Major | tools/distcp | 10000kang | 10000kang | | HADOOP-18641 | cyclonedx maven plugin breaks builds on recent maven releases (3.9.0) | Major | build | Steve Loughran | Steve Loughran | | MAPREDUCE-7428 | Fix failures related to Junit 4 to Junit 5 upgrade in org.apache.hadoop.mapreduce.v2.app.webapp | Critical | test | Ashutosh Gupta | Akira Ajisaka | | HADOOP-18636 | LocalDirAllocator cannot recover from directory tree deletion during the life of a filesystem client | Minor | fs, fs/azure, fs/s3 | Steve Loughran | Steve Loughran | | MAPREDUCE-7434 | Fix ShuffleHandler tests | Major | tets | Tamas Domok | Tamas Domok | | HDFS-16935 | TestFsDatasetImpl.testReportBadBlocks brittle | Minor | test | Steve Loughran | Viraj Jasani | | HDFS-16923 | The getListing RPC will throw NPE if the path does not exist | Critical | namenode | ZanderXu | ZanderXu | | HDFS-16896 | HDFS Client hedged read has increased failure rate than without hedged read | Major | hdfs-client | Tom McCormick | Tom McCormick | | YARN-11383 | Workflow priority mappings is case sensitive | Major | yarn | Aparajita Choudhary | Aparajita Choudhary | | HDFS-16939 | Fix the thread safety bug in LowRedundancyBlocks | Major | namanode | Shuyan Zhang | Shuyan Zhang | | HDFS-16934 | org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig regression | Minor | dfsadmin, test | Steve Loughran | Shilun Fan | | HDFS-16942 | Send error to datanode if FBR is rejected due to bad lease | Major | datanode, namenode | Stephen ODonnell | Stephen ODonnell | | HADOOP-18668 | Path capability probe for truncate is only honored by RawLocalFileSystem | Major | fs, httpfs, viewfs | Viraj Jasani | Viraj Jasani 
| | HADOOP-18666 | A whitelist of endpoints to skip Kerberos authentication doesnt work for ResourceManager and Job History Server | Major | security | YUBI LEE | YUBI LEE | | HADOOP-18329 | Add support for IBM Semeru OE JRE 11.0.15.0 and greater | Major | auth, common | Jack | Jack | | HADOOP-18662 | ListFiles with recursive fails with FNF | Major | common | Ayush Saxena | Ayush Saxena | | YARN-11461 | NPE in determineMissingParents when the queue is invalid | Major | capacity scheduler | Tamas Domok | Tamas Domok | | HADOOP-18548 | Hadoop Archive tool (HAR) should acquire delegation tokens from source and destination file systems | Major | tools | Wei-Chiu Chuang | Szabolcs Gl | | HADOOP-18680 | Insufficient heap during full test runs in Docker container. | Minor | build | Chris Nauroth | Chris Nauroth | | HDFS-16949 | Update ReadTransferRate to ReadLatencyPerGB for effective percentile metrics | Minor | datanode | Ravindra Dingankar | Ravindra Dingankar | | HDFS-16911 | Distcp with snapshot diff to support Ozone filesystem. | Major | distcp | Sadanand Shenoy | Sadanand Shenoy | | YARN-11326 | [Federation] Add RM FederationStateStoreService Metrics | Major | federation, resourcemanager | Shilun Fan | Shilun Fan | | HDFS-16982 | Use the right Quantiles Array for Inverse Quantiles snapshot | Minor | datanode, metrics | Ravindra Dingankar | Ravindra Dingankar | | HDFS-16954 | RBF: The operation of renaming a multi-subcluster directory to a single-cluster directory should throw ioexception | Minor | rbf | Max Xie | Max Xie | | HDFS-16986 | EC: Fix locationBudget in getListing() | Major | erasure-coding | Shuyan Zhang | Shuyan Zhang | | HADOOP-18714 | Wrong StringUtils.join() called in AbstractContractRootDirectoryTest | Trivial | test | Attila Doroszlai | Attila Doroszlai | | HADOOP-18705 | ABFS should exclude incompatible credential providers | Major | fs/azure | Tamas Domok | Tamas Domok | | HDFS-16975 | FileWithSnapshotFeature.isCurrentFileDeleted is not reloaded from" }, { "data": "| Major | namanode | Tsz-wo Sze | Tsz-wo Sze | | HADOOP-18660 | Filesystem Spelling Mistake | Trivial | fs | Sebastian Baunsgaard | Sebastian Baunsgaard | | MAPREDUCE-7437 | MR Fetcher class to use an AtomicInteger to generate IDs. | Major | build, client | Steve Loughran | Steve Loughran | | HDFS-16672 | Fix lease interval comparison in BlockReportLeaseManager | Trivial | namenode | dzcxzl | dzcxzl | | YARN-11459 | Consider changing label called max resource on UIv1 and UIv2 | Major | yarn, yarn-ui-v2 | Riya Khandelwal | Riya Khandelwal | | HDFS-16972 | Delete a snapshot may deleteCurrentFile | Major | snapshots | Tsz-wo Sze | Tsz-wo Sze | | YARN-11482 | Fix bug of DRF comparison DominantResourceFairnessComparator2 in fair scheduler | Major | fairscheduler | Xiaoqiao He | Xiaoqiao He | | HDFS-16897 | Fix abundant Broken pipe exception in BlockSender | Minor | hdfs | fanluo | fanluo | | HADOOP-18729 | Fix mvnsite on Windows 10 | Critical | build, site | Gautham Banasandra | Gautham Banasandra | | HDFS-16865 | RBF: The source path is always / after RBF proxied the complete, addBlock and getAdditionalDatanode RPC. 
| Major | rbf | ZanderXu | ZanderXu | | HADOOP-18734 | Create qbt.sh symlink on Windows | Critical | build | Gautham Banasandra | Gautham Banasandra | | HDFS-16999 | Fix wrong use of processFirstBlockReport() | Major | block placement | Shuyan Zhang | Shuyan Zhang | | YARN-11467 | RM failover may fail when the nodes.exclude-path file does not exist | Minor | resourcemanager | dzcxzl | dzcxzl | | HDFS-16985 | Fix data missing issue when delete local block file. | Major | datanode | Chengwei Wang | Chengwei Wang | | YARN-11489 | Fix memory leak of DelegationTokenRenewer futures in DelegationTokenRenewerPoolTracker | Major | resourcemanager | Chun Chen | Chun Chen | | YARN-11312 | [UI2] Refresh buttons dont work after EmberJS upgrade | Minor | yarn-ui-v2 | Brian Goerlitz | Susheel Gupta | | HADOOP-18652 | Path.suffix raises NullPointerException | Minor | hdfs-client | Patrick Grandjean | Patrick Grandjean | | HDFS-17018 | Improve dfsclient log format | Minor | dfsclient | Xianming Lei | Xianming Lei | | HDFS-16697 | Add logs if resources are not available in NameNodeResourcePolicy | Minor | namenode | ECFuzz | ECFuzz | | HADOOP-17518 | Usage of incorrect regex range A-z | Minor | httpfs | Marcono1234 | Nishtha Shah | | HDFS-17022 | Fix the exception message to print the Identifier pattern | Minor | httpfs | Nishtha Shah | Nishtha Shah | | HDFS-17017 | Fix the issue of arguments number limit in report command in DFSAdmin. | Major | dfsadmin | Haiyang Hu | Haiyang Hu | | HADOOP-18746 | Install Python 3 for Windows 10 docker image | Major | build | Gautham Banasandra | Gautham Banasandra | | YARN-11490 | JMX QueueMetrics breaks after mutable config validation in CS | Major | capacityscheduler | Tamas Domok | Tamas Domok | | HDFS-17000 | Potential infinite loop in TestDFSStripedOutputStreamUpdatePipeline.testDFSStripedOutputStreamUpdatePipeline | Major | test | Marcono1234 | Marcono1234 | | HDFS-17027 | RBF: Add supports for observer.auto-msync-period when using routers | Major | rbf | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HDFS-16996 | Fix flaky testFsCloseAfterClusterShutdown in TestFileCreation | Major | test | Uma Maheswara Rao G | Nishtha Shah | | HDFS-16983 | Fix concat operation doesnt honor" }, { "data": "| Major | namenode | caozhiqiang | caozhiqiang | | HDFS-17011 | Fix the metric of HttpPort at DataNodeInfo | Minor | datanode | Zhaohui Wang | Zhaohui Wang | | HDFS-17019 | Optimize the logic for reconfigure slow peer enable for Namenode | Major | namanode | Haiyang Hu | Haiyang Hu | | HDFS-17003 | Erasure Coding: invalidate wrong block after reporting bad blocks from datanode | Critical | namenode | farmmamba | farmmamba | | HADOOP-18718 | Fix several maven build warnings | Minor | build | Dongjoon Hyun | Dongjoon Hyun | | MAPREDUCE-7435 | ManifestCommitter OOM on azure job | Major | client | Steve Loughran | Steve Loughran | | HDFS-16946 | RBF: top real owners metrics cant been parsed json string | Minor | rbf | Max Xie | Nishtha Shah | | HDFS-17041 | RBF: Fix putAll impl for mysql and file based state stores | Major | rbf | Viraj Jasani | Viraj Jasani | | HDFS-17045 | File renamed from a snapshottable dir to a non-snapshottable dir cannot be deleted. 
| Major | namenode, snapshots | Tsz-wo Sze | Tsz-wo Sze | | YARN-11513 | Applications submitted to ambiguous queue fail during recovery if Specified Placement Rule is used | Major | yarn | Susheel Gupta | Susheel Gupta | | MAPREDUCE-7441 | Race condition in closing FadvisedFileRegion | Major | yarn | Benjamin Teke | Benjamin Teke | | HADOOP-18751 | Fix incorrect output path in javadoc build phase | Critical | build | Gautham Banasandra | Gautham Banasandra | | HDFS-17052 | Improve BlockPlacementPolicyRackFaultTolerant to avoid choose nodes failed when no enough Rack. | Major | namanode | Hualong Zhang | Hualong Zhang | | YARN-11528 | Lock triple-beam to the version compatible with node.js 12 to avoid compilation error | Major | build | Ayush Saxena | Masatake Iwasaki | | YARN-11464 | TestFSQueueConverter#testAutoCreateV2FlagsInWeightMode has a missing dot before auto-queue-creation-v2.enabled for method call assertNoValueForQueues | Major | yarn | Susheel Gupta | Susheel Gupta | | HDFS-17081 | EC: Add logic for striped blocks in isSufficientlyReplicated | Major | erasure-coding | Haiyang Hu | Haiyang Hu | | HADOOP-18757 | S3A Committer only finalizes the commits in a single thread | Major | fs/s3 | Moditha Hewasinghage | Moditha Hewasinghage | | HDFS-17094 | EC: Fix bug in block recovery when there are stale datanodes | Major | erasure-coding | Shuyan Zhang | Shuyan Zhang | | YARN-9877 | Intermittent TIME_OUT of LogAggregationReport | Major | log-aggregation, resourcemanager, yarn | Adam Antal | Adam Antal | | HDFS-17067 | Use BlockingThreadPoolExecutorService for nnProbingThreadPool in ObserverReadProxy | Major | hdfs | Xing Lin | Xing Lin | | YARN-11534 | Incorrect exception handling during container recovery | Major | yarn | Peter Szucs | Peter Szucs | | HADOOP-18807 | Close child file systems in ViewFileSystem when cache is disabled. | Major | fs | Shuyan Zhang | Shuyan Zhang | | MAPREDUCE-7442 | exception message is not intusive when accessing the job configuration web UI | Major | applicationmaster | Jiandan Yang | Jiandan Yang | | HADOOP-18823 | Add Labeler Github Action. | Major | build | Ayush Saxena | Ayush Saxena | | HDFS-17111 | RBF: Optimize msync to only call nameservices that have observer reads enabled. | Major | rbf | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | YARN-11539 | Flexible AQC: setting capacity with leaf-template doesnt work | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-11538 | CS UI: queue filter do not work as expected when submitting apps with leaf queues name | Major | resourcemanager | Jiandan Yang | Jiandan Yang | | HDFS-17134 | RBF: Fix duplicate results of getListing through" }, { "data": "| Major | rbf | Shuyan Zhang | Shuyan Zhang | | MAPREDUCE-7446 | NegativeArraySizeException when running MR jobs with large data size | Major | mrv1 | Peter Szucs | Peter Szucs | | YARN-11545 | FS2CS not converts ACLs when all users are allowed | Major | yarn | Peter Szucs | Peter Szucs | | HDFS-17122 | Rectify the table length discrepancy in the DataNode UI. | Major | ui | Hualong Zhang | Hualong Zhang | | HADOOP-18826 | abfs getFileStatus(/) fails with Value for one of the query parameters specified in the request URI is invalid., 400 | Major | fs/azure | Sergey Shabalov | Anuj Modi | | HDFS-17150 | EC: Fix the bug of failed lease recovery. 
| Major | erasure-coding | Shuyan Zhang | Shuyan Zhang | | HDFS-17154 | EC: Fix bug in updateBlockForPipeline after failover | Major | erasure-coding | Shuyan Zhang | Shuyan Zhang | | HDFS-17156 | Client may receive old state ID which will lead to inconsistent reads | Minor | rbf | Chunyi Yang | Chunyi Yang | | YARN-11551 | RM format-conf-store should delete all the content of ZKConfigStore | Major | resourcemanager | Benjamin Teke | Benjamin Teke | | HDFS-17151 | EC: Fix wrong metadata in BlockInfoStriped after recovery | Major | erasure-coding | Shuyan Zhang | Shuyan Zhang | | HDFS-17093 | Fix block report lease issue to avoid missing some storages report. | Minor | namenode | Yanlei Yu | Yanlei Yu | | YARN-11552 | timeline endpoint: /clusters/{clusterid}/apps/{appid}/entity-types Error when using hdfs store | Major | timelineservice | Jiandan Yang | Jiandan Yang | | YARN-11554 | Fix TestRMFailover#testWebAppProxyInStandAloneMode Failed | Major | resourcemanager | Shilun Fan | Shilun Fan | | HDFS-17166 | RBF: Throwing NoNamenodesAvailableException for a long time, when failover | Major | rbf | Jian Zhang | Jian Zhang | | HDFS-16933 | A race in SerialNumberMap will cause wrong owner, group and XATTR | Major | namanode | ZanderXu | ZanderXu | | HDFS-17167 | Observer NameNode -observer startup option conflicts with -rollingUpgrade startup option | Minor | namenode | Danny Becker | Danny Becker | | HADOOP-18870 | CURATOR-599 change broke functionality introduced in HADOOP-18139 and HADOOP-18709 | Major | common | Ferenc Erdelyi | Ferenc Erdelyi | | HADOOP-18824 | ZKDelegationTokenSecretManager causes ArithmeticException due to improper numRetries value checking | Critical | common | ConfX | ConfX | | HDFS-17190 | EC: Fix bug of OIV processing XAttr. | Major | erasure-coding | Shuyan Zhang | Shuyan Zhang | | HDFS-17138 | RBF: We changed the hadoop.security.authtolocal configuration of one router, the other routers stopped working | Major | rbf | Xiping Zhang | Xiping Zhang | | HDFS-17105 | mistakenly purge editLogs even after it is empty in NNStorageRetentionManager | Minor | namanode | ConfX | ConfX | | HDFS-17198 | RBF: fix bug of getRepresentativeQuorum when records have same dateModified | Major | rbf | Jian Zhang | Jian Zhang | | YARN-11573 | Add config option to make container allocation prefer nodes without reserved containers | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | YARN-11558 | Fix dependency convergence error on hbase2 profile | Major | buid, yarn | Masatake Iwasaki | Masatake Iwasaki | | HADOOP-18912 | upgrade snappy-java to" }, { "data": "due to CVE | Major | build | PJ Fanning | PJ Fanning | | HDFS-17133 | TestFsDatasetImpl missing null check when cleaning up | Critical | test | ConfX | ConfX | | HDFS-17209 | Correct comments to align with the code | Trivial | datanode | Yu Wang | Yu Wang | | YARN-11578 | Fix performance issue of permission check in verifyAndCreateRemoteLogDir | Major | log-aggregation | Tamas Domok | Tamas Domok | | HADOOP-18922 | Race condition in ZKDelegationTokenSecretManager creating znode | Major | common | Kevin Risden | Kevin Risden | | HADOOP-18929 | Build failure while trying to create apache 3.3.7 release locally. 
| Critical | build | Mukund Thakur | PJ Fanning | | YARN-11590 | RM process stuck after calling confStore.format() when ZK SSL/TLS is enabled, as netty thread waits indefinitely | Major | resourcemanager | Ferenc Erdelyi | Ferenc Erdelyi | | HDFS-17220 | fix same available space policy in AvailableSpaceVolumeChoosingPolicy | Major | hdfs | Fei Guo | Fei Guo | | HADOOP-18941 | Modify HBase version in BUILDING.txt | Minor | common | Zepeng Zhang | Zepeng Zhang | | YARN-11595 | Fix hadoop-yarn-client#java.lang.NoClassDefFoundError | Major | yarn-client | Shilun Fan | Shilun Fan | | HDFS-17237 | Remove IPCLoggerChannel Metrics when the logger is closed | Major | namenode | Stephen ODonnell | Stephen ODonnell | | HDFS-17231 | HA: Safemode should exit when resources are from low to available | Major | ha | kuper | kuper | | HDFS-17024 | Potential data race introduced by HDFS-15865 | Major | dfsclient | Wei-Chiu Chuang | Segawa Hiroaki | | YARN-11597 | NPE when getting the static files in SLSWebApp | Major | scheduler-load-simulator | Junfan Zhang | Junfan Zhang | | HADOOP-18905 | Negative timeout in ZKFailovercontroller due to overflow | Major | common | ConfX | ConfX | | YARN-11584 | [CS] Attempting to create Leaf Queue with empty shortname should fail without crashing RM | Major | capacity scheduler | Brian Goerlitz | Brian Goerlitz | | HDFS-17246 | Fix shaded client for building Hadoop on Windows | Major | hdfs-client | Gautham Banasandra | Gautham Banasandra | | MAPREDUCE-7459 | Fixed TestHistoryViewerPrinter flakiness during string comparison | Minor | test | Rajiv Ramachandran | Rajiv Ramachandran | | YARN-11599 | Incorrect log4j properties file in SLS sample conf | Major | scheduler-load-simulator | Junfan Zhang | Junfan Zhang | | YARN-11608 | QueueCapacityVectorInfo NPE when accesible labels config is used | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | HDFS-17249 | Fix TestDFSUtil.testIsValidName() unit test failure | Minor | test | liuguanghua | liuguanghua | | HADOOP-18969 | S3A: AbstractS3ACostTest to clear bucket fs.s3a.create.performance flag | Minor | fs/s3, test | Steve Loughran | Steve Loughran | | YARN-11616 | Fast fail when multiple attribute kvs are specified | Major | nodeattibute | Junfan Zhang | Junfan Zhang | | HDFS-17261 | RBF: Fix getFileInfo return wrong path when get mountTable path which multi-level | Minor | rbf | liuguanghua | liuguanghua | | HDFS-17271 | Web UI DN report shows random order when sorting with dead DNs | Minor | namenode, rbf, ui | Felix N | Felix N | | HDFS-17233 | The conf dfs.datanode.lifeline.interval.seconds is not considering time unit seconds | Major | datanode | Hemanth Boyina | Palakur Eshwitha Sai | | HDFS-17260 | Fix the logic for reconfigure slow peer enable for" }, { "data": "| Major | namanode | Zhaobo Huang | Zhaobo Huang | | HDFS-17232 | RBF: Fix NoNamenodesAvailableException for a long time, when use observer | Major | rbf | Jian Zhang | Jian Zhang | | HDFS-17270 | RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client to get token in some case | Major | rbf | lei w | lei w | | HDFS-17262 | Fixed the verbose log.warn in DFSUtil.addTransferRateMetric() | Major | logging | Bryan Beaudreault | Ravindra Dingankar | | HDFS-17265 | RBF: Throwing an exception prevents the permit from being released when using FairnessPolicyController | Major | rbf | Jian Zhang | Jian Zhang | | HDFS-17278 | Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module | Minor | nfs, test | Ruby | 
Ruby |
| HADOOP-19011 | Possible ConcurrentModificationException if Exec command fails | Major | common | Attila Doroszlai | Attila Doroszlai |
| MAPREDUCE-7463 | Fix missing comma in HistoryServerRest.html response body | Minor | documentation | wangzhongwei | wangzhongwei |
| HDFS-17240 | Fix a typo in DataStorage.java | Trivial | datanode | Yu Wang | Yu Wang |
| HDFS-17056 | EC: Fix verifyClusterSetup output in case of an invalid param. | Major | erasure-coding | Ayush Saxena | Zhaobo Huang |
| HDFS-17298 | Fix NPE in DataNode.handleBadBlock and BlockSender | Major | datanode | Haiyang Hu | Haiyang Hu |
| HDFS-17284 | EC: Fix int overflow in calculating numEcReplicatedTasks and numReplicationTasks during block recovery | Major | ec, namenode | Hualong Zhang | Hualong Zhang |
| HADOOP-19010 | NullPointerException in Hadoop Credential Check CLI Command | Major | common | Anika Kelhanka | Anika Kelhanka |
| HDFS-17182 | DataSetLockManager.lockLeakCheck() is not thread-safe. | Minor | datanode | liuguanghua | liuguanghua |
| HDFS-17309 | RBF: Fix Router Safemode check contidition error | Major | rbf | liuguanghua | liuguanghua |
| YARN-11646 | QueueCapacityConfigParser shouldn't ignore capacity config with 0 memory | Major | capacityscheduler | Tamas Domok | Tamas Domok |
| HDFS-17290 | HDFS: add client rpc backoff metrics due to disconnection from lowest priority queue | Major | metrics | Lei Yang | Lei Yang |
| HADOOP-18894 | upgrade sshd-core due to CVEs | Major | build, common | PJ Fanning | PJ Fanning |
| YARN-11639 | ConcurrentModificationException and NPE in PriorityUtilizationQueueOrderingPolicy | Major | capacity scheduler | Ferenc Erdelyi | Ferenc Erdelyi |
| HADOOP-19049 | Class loader leak caused by StatisticsDataReferenceCleaner thread | Major | common | Jia Fan | Jia Fan |

| JIRA | Summary | Priority | Component | Reporter | Contributor |
|:---|:---|:---|:---|:---|:---|
| YARN-10327 | Remove duplication of checking for invalid application ID in TestLogsCLI | Trivial | test | Hudáky Márton Gyula | Hudáky Márton Gyula |
| MAPREDUCE-7280 | MiniMRYarnCluster has hard-coded timeout waiting to start history server, with no way to disable | Major | test | Nick Dimiduk | Masatake Iwasaki |
| MAPREDUCE-7288 | Fix TestLongLong#testRightShift | Minor | test | Wanqiang Ji | Wanqiang Ji |
| HDFS-15514 | Remove useless dfs.webhdfs.enabled | Minor | test | Hui Fei | Hui Fei |
| HADOOP-17205 | Move personality file from Yetus to Hadoop repository | Major | test, yetus | Chao Sun | Chao Sun |
| HDFS-15564 | Add Test annotation for TestPersistBlocks#testRestartDfsWithSync | Minor | hdfs | Hui Fei | Hui Fei |
| HDFS-15559 | Complement initialize member variables in TestHdfsConfigFields#initializeMemberVariables | Minor | test | Lisheng Sun | Lisheng Sun |
| HDFS-15576 | Erasure Coding: Add rs and rs-legacy codec test for addPolicies | Minor | erasure-coding, test | Hui Fei | Hui Fei |
| YARN-9333 | TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes fails intermittently | Major | yarn | Prabhu Joseph | Peter Bacsko |
| HDFS-15690 | Add lz4-java as hadoop-hdfs test dependency | Major | test | L. C. Hsieh | L. C. Hsieh |
| YARN-10520 | Deprecated the residual nested class for the LCEResourceHandler | Major | nodemanager | Wanqiang Ji | Wanqiang Ji |
| HDFS-15898 | Test case TestOfflineImageViewer fails | Minor | test | Hui Fei | Hui Fei |
| HDFS-15904 | Flaky test TestBalancer#testBalancerWithSortTopNodes() | Major | balancer & mover, test | Viraj Jasani | Viraj Jasani |
| HDFS-16041 | TestErasureCodingCLI fails | Minor | test | Hui Fei | Hui Fei |
| MAPREDUCE-7342 | Stop RMService in TestClientRedirect.testRedirect() | Minor | test | Zhengxi Li | Zhengxi Li |
| MAPREDUCE-7311 | Fix non-idempotent test in TestTaskProgressReporter | Minor | test | Zhengxi Li | Zhengxi Li |
| HDFS-16224 | testBalancerWithObserverWithFailedNode times out | Trivial | test | Leon Gao | Leon Gao |
| HADOOP-17868 | Add more test for the BuiltInGzipCompressor | Major | test | L. C. Hsieh | L. C. Hsieh |
| HADOOP-17936 | TestLocalFSCopyFromLocal.testDestinationFileIsToParentDirectory failure after reverting HADOOP-16878 | Major | test | Chao Sun | Chao Sun |
| HDFS-15862 | Make TestViewfsWithNfs3.testNfsRenameSingleNN() idempotent | Minor | nfs | Zhengxi Li | Zhengxi Li |
| YARN-6272 | TestAMRMClient#testAMRMClientWithContainerResourceChange fails intermittently | Major | yarn | Ray Chiang | Andras Gyori |
| HADOOP-18089 | Test coverage for Async profiler servlets | Minor | common | Viraj Jasani | Viraj Jasani |
| YARN-11081 | TestYarnConfigurationFields consistently keeps failing | Minor | test | Viraj Jasani | Viraj Jasani |
| HDFS-16573 | Fix test TestDFSStripedInputStreamWithRandomECPolicy | Minor | test | daimin | daimin |
| HDFS-16637 | TestHDFSCLI#testAll consistently failing | Major | test | Viraj Jasani | Viraj Jasani |
| YARN-11248 | Add unit test for FINISHED_CONTAINERS_PULLED_BY_AM event on DECOMMISSIONING | Major | test | Ashutosh Gupta | Ashutosh Gupta |
| HDFS-16625 | Unit tests aren't checking for PMDK availability | Major | test | Steve Vaughan | Steve Vaughan |
| YARN-11388 | Prevent resource leaks in TestClientRMService. | Minor | test | Chris Nauroth | Chris Nauroth |
| YARN-5607 | Document TestContainerResourceUsage#waitForContainerCompletion | Major | resourcemanager, test | Karthik Kambatla | Susheel Gupta |
| HDFS-17010 | Add a subtree test to TestSnapshotDiffReport | Minor | test | Tsz-wo Sze | Tsz-wo Sze |
| YARN-11526 | Add a unit test | Minor | client | Lu Yuan | Lu Yuan |
| YARN-11621 | Fix intermittently failing unit test: TestAMRMProxy.testAMRMProxyTokenRenewal | Major | yarn | Susheel Gupta | Susheel Gupta |
| HDFS-16904 | Close webhdfs during the teardown | Major | hdfs | Steve Vaughan | Steve Vaughan |
| HDFS-17370 | Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf | Major | .
| Takanobu Asanuma | Takanobu Asanuma | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:|:|:--|:--|:|:| | HADOOP-16169 | ABFS: Bug fix for getPathProperties | Major | fs/azure | Da Zhou | Da Zhou | | HDFS-15146 |" }, { "data": "testBalancerRPCDelayQpsDefault fails intermittently | Minor | balancer, test | Ahmed Hussein | Ahmed Hussein | | HDFS-15051 | RBF: Impose directory level permissions for Mount entries | Major | rbf | Xiaoqiao He | Xiaoqiao He | | YARN-10234 | FS-CS converter: dont enable auto-create queue property for root | Critical | fairscheduler | Peter Bacsko | Peter Bacsko | | YARN-10240 | Prevent Fatal CancelledException in TimelineV2Client when stopping | Major | ATSv2 | Tarun Parimi | Tarun Parimi | | HADOOP-17002 | ABFS: Avoid storage calls to check if the account is HNS enabled or not | Minor | fs/azure | Bilahari T H | Bilahari T H | | YARN-10159 | TimelineConnector does not destroy the jersey client | Major | ATSv2 | Prabhu Joseph | Tanu Ajmera | | YARN-10194 | YARN RMWebServices /scheduler-conf/validate leaks ZK Connections | Blocker | capacityscheduler | Akhil PB | Prabhu Joseph | | YARN-10215 | Endpoint for obtaining direct URL for the logs | Major | yarn | Adam Antal | Andras Gyori | | YARN-6973 | Adding RM Cluster Id in ApplicationReport | Major | applications, federation | Giovanni Matteo Fumarola | Bilwa S T | | YARN-6553 | Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests | Major | federation, router, test | Giovanni Matteo Fumarola | Bilwa S T | | HDFS-14353 | Erasure Coding: metrics xmitsInProgress become to negative. | Major | datanode, erasure-coding | Baolong Mao | Baolong Mao | | HDFS-15305 | Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable. | Major | fs, hadoop-client, hdfs-client, viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10257 | FS-CS converter: skip increment properties for mem/vcores and fix DRF check | Major | fairscheduler, fs-cs | Peter Bacsko | Peter Bacsko | | HADOOP-17027 | Add tests for reading fair call queue capacity weight configs | Major | ipc | Fengnan Li | Fengnan Li | | YARN-8942 | PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value | Minor | federation | Akshay Agarwal | Bilwa S T | | YARN-10259 | Reserved Containers not allocated from available space of other nodes in CandidateNodeSet in MultiNodePlacement | Major | capacityscheduler | Prabhu Joseph | Prabhu Joseph | | HDFS-15306 | Make mount-table to read from central place ( Lets say from HDFS) | Major | configuration, hadoop-client | Uma Maheswara Rao G | Uma Maheswara Rao G | | HDFS-15082 | RBF: Check each component length of destination path when add/update mount entry | Major | rbf | Xiaoqiao He | Xiaoqiao He | | HDFS-15340 | RBF: Implement BalanceProcedureScheduler basic framework | Major | rbf | Jinglun | Jinglun | | HDFS-15322 | Make NflyFS to work when ViewFsOverloadSchemes scheme and target uris schemes are same. 
| Major | fs, nflyFs, viewfs, viewfsOverloadScheme | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10108 | FS-CS converter: nestedUserQueue with default rule results in invalid queue mapping | Major | capacity scheduler, fs-cs | Prabhu Joseph | Gergely Pollk | | HADOOP-17053 | ABFS: FS initialize fails for incompatible account-agnostic Token Provider setting | Major | fs/azure | Sneha Vijayarajan | Sneha Vijayarajan | | HDFS-15321 | Make DFSAdmin tool to work with ViewFSOverloadScheme | Major | dfsadmin, fs, viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10284 | Add lazy initialization of LogAggregationFileControllerFactory in LogServlet | Major | log-aggregation, yarn | Adam Antal | Adam Antal | | HDFS-15330 | Document the ViewFSOverloadScheme details in ViewFS guide | Major | viewfs, viewfsOverloadScheme | Uma Maheswara Rao G | Uma Maheswara Rao G | | HDFS-15389 | DFSAdmin should close filesystem and dfsadmin -setBalancerBandwidth should work with ViewFSOverloadScheme | Major | dfsadmin, viewfsOverloadScheme | Ayush Saxena | Ayush Saxena | | HDFS-15394 | Add all available fs.viewfs.overload.scheme.target.<scheme>.impl classes in core-default.xml" }, { "data": "| Major | configuration, viewfs, viewfsOverloadScheme | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10293 | Reserved Containers not allocated from available space of other nodes in CandidateNodeSet in MultiNodePlacement (YARN-10259) | Major | capacity scheduler | Prabhu Joseph | Prabhu Joseph | | HDFS-15387 | FSUsage$DF should consider ViewFSOverloadScheme in processPath | Minor | viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10292 | FS-CS converter: add an option to enable asynchronous scheduling in CapacityScheduler | Major | fairscheduler | Benjamin Teke | Benjamin Teke | | HDFS-15346 | FedBalance tool implementation | Major | rbf | Jinglun | Jinglun | | HADOOP-16888 | [JDK11] Support JDK11 in the precommit job | Major | build | Akira Ajisaka | Akira Ajisaka | | HADOOP-17004 | ABFS: Improve the ABFS driver documentation | Minor | fs/azure | Bilahari T H | Bilahari T H | | HDFS-15418 | ViewFileSystemOverloadScheme should represent mount links as non symlinks | Major | hdfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | HADOOP-16922 | ABFS: Change in User-Agent header | Minor | fs/azure | Bilahari T H | Bilahari T H | | YARN-9930 | Support max running app logic for CapacityScheduler | Major | capacity scheduler, capacityscheduler | zhoukang | Peter Bacsko | | HDFS-15428 | Javadocs fails for hadoop-federation-balance | Minor | documentation | Xieming Li | Xieming Li | | HDFS-15427 | Merged ListStatus with Fallback target filesystem and InternalDirViewFS. | Major | viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10316 | FS-CS converter: convert maxAppsDefault, maxRunningApps settings | Major | fairscheduler, fs-cs | Peter Bacsko | Peter Bacsko | | HADOOP-17054 | ABFS: Fix idempotency test failures when SharedKey is set as AuthType | Major | fs/azure | Sneha Vijayarajan | Sneha Vijayarajan | | HADOOP-17015 | ABFS: Make PUT and POST operations idempotent | Major | fs/azure | Sneha Vijayarajan | Sneha Vijayarajan | | HDFS-15429 | mkdirs should work when parent dir is internalDir and fallback configured. 
| Major | hdfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-6526 | Refactoring SQLFederationStateStore by avoiding to recreate a connection at every call | Major | federation | Giovanni Matteo Fumarola | Bilwa S T | | HDFS-15436 | Default mount table name used by ViewFileSystem should be configurable | Major | viewfs, viewfsOverloadScheme | Virajith Jalaparti | Virajith Jalaparti | | HDFS-15410 | Add separated config file hdfs-fedbalance-default.xml for fedbalance tool | Major | rbf | Jinglun | Jinglun | | HDFS-15374 | Add documentation for fedbalance tool | Major | documentation, rbf | Jinglun | Jinglun | | YARN-10325 | Document max-parallel-apps for Capacity Scheduler | Major | capacity scheduler, capacityscheduler | Peter Bacsko | Peter Bacsko | | HADOOP-16961 | ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics) | Major | fs/azure | Gabor Bota | Mehakmeet Singh | | HDFS-15430 | create should work when parent dir is internalDir and fallback configured. | Major | hdfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | HDFS-15450 | Fix NN trash emptier to work if ViewFSOveroadScheme enabled | Major | namenode, viewfsOverloadScheme | Uma Maheswara Rao G | Uma Maheswara Rao G | | HADOOP-17111 | Replace Guava Optional with Java8+ Optional | Major | build, common | Ahmed Hussein | Ahmed Hussein | | HDFS-15417 | RBF: Get the datanode report from cache for federation WebHDFS operations | Major | federation, rbf, webhdfs | Ye Ni | Ye Ni | | HDFS-15449 | Optionally ignore port number in mount-table name when picking from initialized uri | Major | hdfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10337 | TestRMHATimelineCollectors fails on hadoop trunk | Major | test, yarn | Ahmed Hussein | Bilwa S T | | HDFS-15462 | Add fs.viewfs.overload.scheme.target.ofs.impl to" }, { "data": "| Major | configuration, viewfs, viewfsOverloadScheme | Siyao Meng | Siyao Meng | | HDFS-15464 | ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links | Major | viewfsOverloadScheme | Uma Maheswara Rao G | Uma Maheswara Rao G | | HADOOP-17101 | Replace Guava Function with Java8+ Function | Major | build | Ahmed Hussein | Ahmed Hussein | | HADOOP-17099 | Replace Guava Predicate with Java8+ Predicate | Minor | build | Ahmed Hussein | Ahmed Hussein | | HDFS-15479 | Ordered snapshot deletion: make it a configurable feature | Major | snapshots | Tsz-wo Sze | Tsz-wo Sze | | HDFS-15478 | When Empty mount points, we are assigning fallback link to self. 
But it should not use full URI for target" }, { "data": "| Major | hdfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | HADOOP-17100 | Replace Guava Supplier with Java8+ Supplier in Hadoop | Major | build | Ahmed Hussein | Ahmed Hussein | | HDFS-15480 | Ordered snapshot deletion: record snapshot deletion in XAttr | Major | snapshots | Tsz-wo Sze | Shashikant Banerjee | | HADOOP-17132 | ABFS: Fix For Idempotency code | Major | fs/azure | Sneha Vijayarajan | Sneha Vijayarajan | | YARN-10315 | Avoid sending RMNodeResourceupdate event if resource is same | Major | graceful | Bibin Chundatt | Sushil Ks | | HADOOP-13221 | s3a create() doesnt check for an ancestor path being a file | Major | fs/s3 | Steve Loughran | Sean Mackrory | | HDFS-15488 | Add a command to list all snapshots for a snaphottable root with snapshot Ids | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | HDFS-15481 | Ordered snapshot deletion: garbage collect deleted snapshots | Major | snapshots | Tsz-wo Sze | Tsz-wo Sze | | HDFS-15498 | Show snapshots deletion status in snapList cmd | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | HADOOP-17091 | [JDK11] Fix Javadoc errors | Major | build | Uma Maheswara Rao G | Akira Ajisaka | | YARN-10229 | [Federation] Client should be able to submit application to RM directly using normal client conf | Major | amrmproxy, federation | JohnsonGuo | Bilwa S T | | HDFS-15497 | Make snapshot limit on global as well per snapshot root directory configurable | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | HADOOP-17131 | Refactor S3A Listing code for better isolation | Major | fs/s3 | Mukund Thakur | Mukund Thakur | | HADOOP-17179 | [JDK 11] Fix javadoc error in Java API link detection | Major | build | Akira Ajisaka | Akira Ajisaka | | HADOOP-17137 | ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic | Minor | fs/azure, test | Sneha Vijayarajan | Bilahari T H | | HADOOP-17149 | ABFS: Test failure: testFailedRequestWhenCredentialsNotCorrect fails when run with SharedKey | Minor | fs/azure | Sneha Vijayarajan | Bilahari T H | | HADOOP-17163 | ABFS: Add debug log for rename failures | Major | fs/azure | Bilahari T H | Bilahari T H | | HDFS-15492 | Make trash root inside each snapshottable directory | Major | hdfs, hdfs-client | Siyao Meng | Siyao Meng | | HDFS-15518 | Wrong operation name in FsNamesystem for listSnapshots | Major | snapshots | Mukul Kumar Singh | Aryan Gupta | | HDFS-15496 | Add UI for deleted snapshots | Major | snapshots | Mukul Kumar Singh | Vivek Ratnavel Subramanian | | HDFS-15524 | Add edit log entry for Snapshot deletion GC thread snapshot deletion | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | HDFS-15483 | Ordered snapshot deletion: Disallow rename between two snapshottable directories | Major | snapshots | Tsz-wo Sze | Shashikant Banerjee | | HDFS-15525 | Make trash root inside each snapshottable directory for WebHDFS | Major | webhdfs | Siyao Meng | Siyao Meng | | HDFS-15533 | Provide DFS API compatible class(ViewDistributedFileSystem), but use ViewFileSystemOverloadScheme inside | Major | dfs, viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10360 | Support Multi Node Placement in SingleConstraintAppPlacementAllocator | Major | capacityscheduler, multi-node-placement | Prabhu Joseph | Prabhu Joseph | | YARN-10106 | Yarn logs CLI filtering by application attempt | Trivial | yarn | Adam Antal | Hudky Mrton Gyula | | YARN-10304 | Create an endpoint for remote 
application log directory path query | Minor | yarn | Andras Gyori | Andras Gyori | | YARN-1806 | webUI update to allow end users to request thread dump | Major | nodemanager | Ming Ma | Siddharth Ahuja | | HDFS-15500 | In-order deletion of snapshots: Diff lists must be update only in the last snapshot | Major | snapshots | Mukul Kumar Singh | Tsz-wo Sze | | HDFS-15531 | Namenode UI: List snapshots in separate table for each snapshottable directory | Major | ui | Vivek Ratnavel Subramanian | Vivek Ratnavel Subramanian | | YARN-10408 | Extract MockQueueHierarchyBuilder to a separate class | Major | resourcemanager, test | Gergely Pollk | Gergely Pollk | | YARN-10409 | Improve MockQueueHierarchyBuilder to detect queue ambiguity | Major | resourcemanager, test | Gergely Pollk | Gergely Pollk | | YARN-10371 | Create variable context class for CS queue mapping rules | Major | yarn | Gergely Pollk | Gergely Pollk | | YARN-10373 | Create Matchers for CS mapping rules | Major | yarn | Gergely Pollk | Gergely Pollk | | HDFS-15542 | Add identified snapshot corruption tests for ordered snapshot deletion | Major | snapshots, test | Shashikant Banerjee | Shashikant Banerjee | | YARN-10386 | Create new JSON schema for Placement Rules | Major | capacity scheduler, capacityscheduler | Peter Bacsko | Peter Bacsko | | YARN-10374 | Create Actions for CS mapping rules | Major | yarn | Gergely Pollk | Gergely Pollk | | YARN-10372 | Create MappingRule class to represent each CS mapping rule | Major | yarn | Gergely Pollk | Gergely Pollk | | YARN-10375 | CS Mapping rule config parser should return MappingRule objects | Major | yarn | Gergely Pollk | Gergely Pollk | | HDFS-15529 | getChildFilesystems should include fallback fs as well | Critical | viewfs, viewfsOverloadScheme | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10376 | Create a class that covers the functionality of UserGroupMappingPlacementRule and AppNameMappingPlacementRule using the new mapping rules | Major | yarn | Gergely Pollk | Gergely Pollk | | YARN-10332 | RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state | Minor | resourcemanager | yehuanhuan | yehuanhuan | | YARN-10411 | Create an allowCreate flag for MappingRuleAction | Major | resourcemanager, scheduler | Gergely Pollk | Gergely Pollk | | HDFS-15558 | ViewDistributedFileSystem#recoverLease should call" }, { "data": "when there are no mounts configured | Major | viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10415 | Create a group matcher which checks ALL groups of the user | Major | resourcemanager, scheduler | Gergely Pollk | Gergely Pollk | | HADOOP-17181 | Handle transient stream read failures in FileSystem contract tests | Minor | fs/s3 | Steve Loughran | Steve Loughran | | YARN-10387 | Implement logic which returns MappingRule objects based on mapping rules | Major | capacity scheduler, resourcemanager | Peter Bacsko | Peter Bacsko | | HDFS-15563 | Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir | Major | snapshots | Nilotpal Nandi | Siyao Meng | | HDFS-15551 | Tiny Improve for DeadNode detector | Minor | hdfs-client | dark_num | imbajin | | HDFS-15555 | RBF: Refresh cacheNS when SocketException occurs | Major | rbf | Akira Ajisaka | Akira Ajisaka | | HDFS-15532 | listFiles on root/InternalDir will fail if fallback root has file | Major | viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | HDFS-15539 | When disallowing snapshot on a dir, throw exception if its trash root is 
not empty | Major | hdfs | Siyao Meng | Siyao Meng | | HDFS-15568 | namenode start failed to start when dfs.namenode.snapshot.max.limit set | Major | snapshots | Nilotpal Nandi | Shashikant Banerjee | | HDFS-15578 | Fix the rename issues with fallback fs enabled | Major | viewfs, viewfsOverloadScheme | Uma Maheswara Rao G | Uma Maheswara Rao G | | HDFS-15585 | ViewDFS#getDelegationToken should not throw UnsupportedOperationException. | Major | viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10424 | Adapt existing AppName and UserGroupMapping unittests to ensure backwards compatibility | Major | resourcemanager, test | Benjamin Teke | Benjamin Teke | | HADOOP-17215 | ABFS: Support for conditional overwrite | Major | fs/azure | Sneha Vijayarajan | Sneha Vijayarajan | | HDFS-14811 | RBF: TestRouterRpc#testErasureCoding is flaky | Major | rbf | Chen Zhang | Chen Zhang | | HADOOP-17279 | ABFS: Test testNegativeScenariosForCreateOverwriteDisabled fails for non-HNS account | Major | fs/azure | Sneha Vijayarajan | Sneha Vijayarajan | | HDFS-15590 | namenode fails to start when ordered snapshot deletion feature is disabled | Major | snapshots | Nilotpal Nandi | Shashikant Banerjee | | HDFS-15596 | ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only. | Major | hdfs-client | Uma Maheswara Rao G | Uma Maheswara Rao G | | HDFS-15598 | ViewHDFS#canonicalizeUri should not be restricted to DFS only API. | Major | viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10413 | Change fs2cs to generate mapping rules in the new format | Major | fs-cs, scheduler | Peter Bacsko | Peter Bacsko | | HDFS-15607 | Create trash dir when allowing snapshottable dir | Major | hdfs | Siyao Meng | Siyao Meng | | HDFS-15613 | RBF: Router FSCK fails after HDFS-14442 | Major | rbf | Akira Ajisaka | Akira Ajisaka | | HDFS-15611 | Add list Snapshot command in WebHDFS | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | HADOOP-17281 | Implement" }, { "data": "in S3AFileSystem | Major | fs/s3 | Mukund Thakur | Mukund Thakur | | HDFS-13293 | RBF: The RouterRPCServer should transfer client IP via CallerContext to NamenodeRpcServer | Major | rbf | Baolong Mao | Hui Fei | | HDFS-15625 | Namenode trashEmptier should not init ViewFs on startup | Major | namenode, viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | YARN-10454 | Add applicationName policy | Major | capacity scheduler, resourcemanager | Peter Bacsko | Peter Bacsko | | HDFS-15620 | RBF: Fix test failures after HADOOP-17281 | Major | rbf, test | Akira Ajisaka | Akira Ajisaka | | HDFS-15614 | Initialize snapshot trash root during NameNode startup if enabled | Major | namanode, snapshots | Siyao Meng | Siyao Meng | | HADOOP-16915 | ABFS: Test failure ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance | Major | fs/azure, test | Bilahari T H | Bilahari T H | | HADOOP-17301 | ABFS: read-ahead error reporting breaks buffer management | Critical | fs/azure | Sneha Vijayarajan | Sneha Vijayarajan | | HADOOP-17288 | Use shaded guava from thirdparty | Major | common, hadoop-thirdparty | Ayush Saxena | Ayush Saxena | | HADOOP-17175 | [JDK11] Fix javadoc errors in hadoop-common module | Major | documentation | Akira Ajisaka | Akira Ajisaka | | HADOOP-17319 | Update the checkstyle config to ban some guava functions | Major | build | Akira Ajisaka | Akira Ajisaka | | HDFS-15630 | RBF: Fix wrong client IP info in CallerContext when requests mount points with 
multi-destinations. | Major | rbf | Chengwei Wang | Chengwei Wang | | HDFS-15459 | TestBlockTokenWithDFSStriped fails intermittently | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | HDFS-15640 | Add diff threshold to FedBalance | Major | rbf | Jinglun | Jinglun | | HDFS-15461 | TestDFSClientRetries#testGetFileChecksum fails intermittently | Major | dfsclient, test | Ahmed Hussein | Ahmed Hussein | | HDFS-9776 | TestHAAppend#testMultipleAppendsDuringCatchupTailing is flaky | Major | test | Vinayakumar B | Ahmed Hussein | | HDFS-15457 | TestFsDatasetImpl fails intermittently | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | HDFS-15460 | TestFileCreation#testServerDefaultsWithMinimalCaching fails intermittently | Major | hdfs, test | Ahmed Hussein | Ahmed Hussein | | HDFS-15657 | RBF: TestRouter#testNamenodeHeartBeatEnableDefault fails by BindException | Major | rbf, test | Akira Ajisaka | Akira Ajisaka | | HDFS-15654 | TestBPOfferService#testMissBlocksWhenReregister fails intermittently | Major | datanode | Ahmed Hussein | Ahmed Hussein | | YARN-10420 | Update CS MappingRule documentation with the new format and features | Major | capacity scheduler, documentation | Gergely Pollk | Peter Bacsko | | HDFS-15643 | EC: Fix checksum computation in case of native encoders | Blocker | erasure-coding | Ahmed Hussein | Ayush Saxena | | HDFS-15548 | Allow configuring DISK/ARCHIVE storage types on same device mount | Major | datanode | Leon Gao | Leon Gao | | HADOOP-17344 | Harmonize guava version and shade guava in yarn-csi | Major | common | Wei-Chiu Chuang | Akira Ajisaka | | YARN-10425 | Replace the legacy placement engine in CS with the new one | Major | capacity scheduler, resourcemanager | Gergely Pollk | Gergely Pollk | | HDFS-15674 | TestBPOfferService#testMissBlocksWhenReregister fails on trunk | Major | datanode, test | Ahmed Hussein | Masatake Iwasaki | | YARN-10486 | FS-CS converter: handle case when weight=0 and allow more lenient capacity checks in Capacity Scheduler | Major | yarn | Peter Bacsko | Peter Bacsko | | YARN-10457 | Add a configuration switch to change between legacy and JSON placement rule format | Major | capacity scheduler | Gergely Pollk | Gergely Pollk | | HDFS-15635 | ViewFileSystemOverloadScheme support specifying mount table loader imp through conf | Major | viewfsOverloadScheme | Junfan Zhang | Junfan Zhang | | HADOOP-17394 | [JDK 11] mvn package -Pdocs fails | Major | build, documentation | Akira Ajisaka | Akira Ajisaka | | HDFS-15677" }, { "data": "TestRouterRpcMultiDestination#testGetCachedDatanodeReport fails on trunk | Major | rbf, test | Ahmed Hussein | Masatake Iwasaki | | HDFS-15689 | allow/disallowSnapshot on EZ roots shouldnt fail due to trash provisioning/emptiness check | Major | hdfs | Siyao Meng | Siyao Meng | | HDFS-15716 | TestUpgradeDomainBlockPlacementPolicy flaky | Major | namenode, test | Ahmed Hussein | Ahmed Hussein | | YARN-10380 | Import logic of multi-node allocation in CapacityScheduler | Critical | capacity scheduler | Wangda Tan | Qi Zhu | | YARN-10031 | Create a general purpose log request with additional query parameters | Major | yarn | Adam Antal | Andras Gyori | | YARN-10526 | RMAppManager CS Placement ignores parent path | Major | capacity scheduler | Gergely Pollk | Gergely Pollk | | YARN-10463 | For Federation, we should support getApplicationAttemptReport. 
| Major | federation, router | Qi Zhu | Qi Zhu | | HDFS-15308 | TestReconstructStripedFile#testNNSendsErasureCodingTasks fails intermittently | Major | erasure-coding | Toshihiko Uchida | Hemanth Boyina | | HDFS-15648 | TestFileChecksum should be parameterized | Major | test | Ahmed Hussein | Masatake Iwasaki | | HDFS-15748 | RBF: Move the router related part from hadoop-federation-balance module to hadoop-hdfs-rbf. | Major | rbf | Jinglun | Jinglun | | HDFS-15766 | RBF: MockResolver.getMountPoints() breaks the semantic of FileSubclusterResolver. | Major | rbf | Jinglun | Jinglun | | YARN-10507 | Add the capability to fs2cs to write the converted placement rules inside capacity-scheduler.xml | Major | capacity scheduler | Peter Bacsko | Peter Bacsko | | HADOOP-15348 | S3A Input Stream bytes read counter isnt getting through to StorageStatistics/instrumentation properly | Minor | fs/s3 | Steve Loughran | Steve Loughran | | HDFS-15702 | Fix intermittent falilure of TestDecommission#testAllocAndIBRWhileDecommission | Minor | hdfs, test | Masatake Iwasaki | Masatake Iwasaki | | YARN-10504 | Implement weight mode in Capacity Scheduler | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-10570 | Remove experimental warning message from fs2cs | Major | scheduler | Peter Bacsko | Peter Bacsko | | YARN-10563 | Fix dependency exclusion problem in poms | Critical | buid, resourcemanager | Peter Bacsko | Peter Bacsko | | HDFS-14558 | RBF: Isolation/Fairness documentation | Major | rbf | CR Hota | Fengnan Li | | HDFS-15762 | TestMultipleNNPortQOP#testMultipleNNPortOverwriteDownStream fails intermittently | Minor | hdfs, test | Toshihiko Uchida | Toshihiko Uchida | | YARN-10525 | Add weight mode conversion to fs2cs | Major | capacity scheduler, fs-cs | Qi Zhu | Peter Bacsko | | HDFS-15672 | TestBalancerWithMultipleNameNodes#testBalancingBlockpoolsWithBlockPoolPolicy fails on trunk | Major | balancer, test | Ahmed Hussein | Masatake Iwasaki | | YARN-10506 | Update queue creation logic to use weight mode and allow the flexible static/dynamic creation | Major | capacity scheduler, resourcemanager | Benjamin Teke | Andras Gyori | | HDFS-15549 | Use Hardlink to move replica between DISK and ARCHIVE storage if on same filesystem mount | Major | datanode | Leon Gao | Leon Gao | | YARN-10574 | Fix the FindBugs warning introduced in YARN-10506 | Major | capacity scheduler, resourcemanager | Gergely Pollk | Gergely Pollk | | YARN-10535 | Make queue placement in CapacityScheduler compliant with auto-queue-placement | Major | capacity scheduler | Wangda Tan | Gergely Pollk | | YARN-10573 | Enhance placement rule conversion in fs2cs in weight mode and enable it by default | Major | capacity scheduler | Peter Bacsko | Peter Bacsko | | YARN-10512 | CS Flexible Auto Queue Creation: Modify RM /scheduler endpoint to include mode of operation for CS | Major | capacity scheduler | Benjamin Teke | Szilard Nemeth | | YARN-10578 | Fix Auto Queue Creation parent handling | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10579 | CS Flexible Auto Queue Creation: Modify RM /scheduler endpoint to include weight values for queues | Major | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | HDFS-15767 | RBF: Router federation rename of" }, { "data": "| Major | rbf | Jinglun | Jinglun | | YARN-10596 | Allow static definition of childless ParentQueues with auto-queue-creation-v2 enabled | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10531 | Be able to disable user limit 
factor for CapacityScheduler Leaf Queue | Major | capacity scheduler | Wangda Tan | Qi Zhu | | YARN-10587 | Fix AutoCreateLeafQueueCreation cap related caculation when in absolute mode. | Major | capacity scheduler | Qi Zhu | Qi Zhu | | YARN-10598 | CS Flexible Auto Queue Creation: Modify RM /scheduler endpoint to extend the creation type with additional information | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-10599 | fs2cs should generate new auto-queue-creation-v2.enabled properties for all parents | Major | resourcemanager | Peter Bacsko | Peter Bacsko | | YARN-10600 | Convert root queue in fs2cs weight mode conversion | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | HADOOP-17424 | Replace HTrace with No-Op tracer | Major | common | Siyao Meng | Siyao Meng | | YARN-10604 | Support auto queue creation without mapping rules | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10605 | Add queue-mappings-override.enable property in FS2CS conversions | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10585 | Create a class which can convert from legacy mapping rule format to the new JSON format | Major | resourcemanager, scheduler | Gergely Pollk | Gergely Pollk | | YARN-10352 | Skip schedule on not heartbeated nodes in Multi Node Placement | Major | scheduler | Prabhu Joseph | Prabhu Joseph | | YARN-10612 | Fix findbugs issue introduced in YARN-10585 | Major | scheduler | Gergely Pollk | Gergely Pollk | | HADOOP-17432 | [JDK 16] KerberosUtil#getOidInstance is broken by JEP 396 | Major | auth | Akira Ajisaka | Akira Ajisaka | | YARN-10615 | Fix Auto Queue Creation hierarchy construction to use queue path instead of short queue name | Critical | yarn | Andras Gyori | Andras Gyori | | HDFS-15820 | Ensure snapshot root trash provisioning happens only post safe mode exit | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | HDFS-15683 | Allow configuring DISK/ARCHIVE capacity for individual volumes | Major | datanode | Leon Gao | Leon Gao | | HDFS-15817 | Rename snapshots while marking them deleted | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | HDFS-15818 | Fix TestFsDatasetImpl.testReadLockCanBeDisabledByConfig | Minor | test | Leon Gao | Leon Gao | | YARN-10619 | CS Mapping Rule %specified rule catches default submissions | Major | capacity scheduler | Gergely Pollk | Gergely Pollk | | YARN-10620 | fs2cs: parentQueue for certain placement rules are not set during conversion | Major | capacity scheduler | Peter Bacsko | Peter Bacsko | | HADOOP-13327 | Add OutputStream + Syncable to the Filesystem Specification | Major | fs | Steve Loughran | Steve Loughran | | YARN-10624 | Support max queues limit configuration in new auto created queue, consistent with old auto" }, { "data": "| Major | capacity scheduler | Qi Zhu | Qi Zhu | | YARN-10622 | Fix preemption policy to exclude childless ParentQueues | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HDFS-15836 | RBF: Fix contract tests after HADOOP-13327 | Major | rbf | Akira Ajisaka | Akira Ajisaka | | HADOOP-17038 | Support disabling buffered reads in ABFS positional reads | Major | fs/azure | Anoop Sam John | Anoop Sam John | | HADOOP-17109 | add guava BaseEncoding to illegalClasses | Major | build, common | Ahmed Hussein | Ahmed Hussein | | HDFS-15834 | Remove the usage of org.apache.log4j.Level | Major | hdfs-common | Akira Ajisaka | Akira Ajisaka | | YARN-10635 | CSMapping rule can return paths with empty parts | Major | 
capacity scheduler | Gergely Pollk | Gergely Pollk | | YARN-10636 | CS Auto Queue creation should reject submissions with empty path parts | Major | capacity scheduler | Gergely Pollk | Gergely Pollk | | YARN-10513 | CS Flexible Auto Queue Creation RM UIv2 modifications | Major | capacity scheduler, resourcemanager, ui | Benjamin Teke | Andras Gyori | | HDFS-15845 | RBF: Router fails to start due to NoClassDefFoundError for hadoop-federation-balance | Major | rbf | Takanobu Asanuma | Takanobu Asanuma | | HDFS-15847 | create client protocol: add ecPolicyName & storagePolicy param to debug statement string | Minor | erasure-coding, namanode | Bhavik Patel | Bhavik Patel | | HADOOP-16748 | Migrate to Python 3 and upgrade Yetus to 0.13.0 | Major | common | Akira Ajisaka | Akira Ajisaka | | HDFS-15781 | Add metrics for how blocks are moved in replaceBlock | Minor | datanode | Leon Gao | Leon Gao | | YARN-10609 | Update the document for YARN-10531(Be able to disable user limit factor for CapacityScheduler Leaf Queue) | Major | documentation | Qi Zhu | Qi Zhu | | YARN-10627 | Extend logging to give more information about weight mode | Major | yarn | Benjamin Teke | Benjamin Teke | | YARN-10655 | Limit queue creation depth relative to its first static parent | Major | yarn | Andras Gyori | Andras Gyori | | YARN-10532 | Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used | Major | capacity scheduler | Wangda Tan | Qi Zhu | | YARN-10639 | Queueinfo related capacity, should adjusted to weight mode. | Major | capacity scheduler | Qi Zhu | Qi Zhu | | YARN-10640 | Adjust the queue Configured capacity to Configured weight number for weight mode in UI. | Major | capacity scheduler, ui | Qi Zhu | Qi Zhu | | HDFS-15848 | Snapshot Operations: Add debug logs at the entry point | Minor | snapshots | Bhavik Patel | Bhavik Patel | | YARN-10412 | Move CS placement rule related changes to a separate package | Major | capacity scheduler | Gergely Pollk | Gergely Pollk | | HADOOP-17548 | ABFS: Toggle Store Mkdirs request overwrite parameter | Major | fs/azure | Sumangala Patki | Sumangala Patki | | YARN-10689 | Fix the findbugs issues in extractFloatValueFromWeightConfig. | Major | capacity scheduler | Qi Zhu | Qi Zhu | | YARN-10686 | Fix TestCapacitySchedulerAutoQueueCreation#testAutoQueueCreationFailsForEmptyPathWithAQCAndWeightMode | Major | capacity scheduler | Qi Zhu | Qi Zhu | | HDFS-15890 | Improve the Logs for File Concat Operation | Minor | namenode | Bhavik Patel | Bhavik Patel | | HDFS-13975 | TestBalancer#testMaxIterationTime fails sporadically | Major | balancer, test | Jason Darrell Lowe | Toshihiko Uchida | | YARN-10688 | ClusterMetrics should support GPU capacity related metrics. | Major | metrics, resourcemanager | Qi Zhu | Qi Zhu | | YARN-10659 | Improve CS MappingRule %secondary_group evaluation | Major | capacity scheduler | Gergely Pollk | Gergely Pollk | | YARN-10692 | Add Node GPU Utilization and apply to NodeMetrics. | Major | gpu | Qi Zhu | Qi Zhu | | YARN-10641 | Refactor the max app related update, and fix maxApplications update error when add new" }, { "data": "| Critical | capacity scheduler | Qi Zhu | Qi Zhu | | YARN-10674 | fs2cs should generate auto-created queue deletion properties | Major | scheduler | Qi Zhu | Qi Zhu | | HDFS-15902 | Improve the log for HTTPFS server operation | Minor | httpfs | Bhavik Patel | Bhavik Patel | | YARN-10713 | ClusterMetrics should support custom resource capacity related metrics. 
| Major | metrics | Qi Zhu | Qi Zhu | | YARN-10120 | In Federation Router Nodes/Applications/About pages throws 500 exception when https is enabled | Critical | federation | Sushanta Sen | Bilwa S T | | YARN-10597 | CSMappingPlacementRule should not create new instance of Groups | Major | capacity scheduler | Gergely Pollk | Gergely Pollk | | HDFS-15921 | Improve the log for the Storage Policy Operations | Minor | namenode | Bhavik Patel | Bhavik Patel | | YARN-9618 | NodesListManager event improvement | Critical | resourcemanager | Bibin Chundatt | Qi Zhu | | HDFS-15940 | Some tests in TestBlockRecovery are consistently failing | Major | test | Viraj Jasani | Viraj Jasani | | YARN-10714 | Remove dangling dynamic queues on reinitialization | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10564 | Support Auto Queue Creation template configurations | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10702 | Add cluster metric for amount of CPU used by RM Event Processor | Minor | yarn | Jim Brennan | Jim Brennan | | YARN-10503 | Support queue capacity in terms of absolute resources with custom resourceType. | Critical | gpu | Qi Zhu | Qi Zhu | | HADOOP-17630 | [JDK 15] TestPrintableString fails due to Unicode 13.0 support | Major | test | Akira Ajisaka | Akira Ajisaka | | HADOOP-17524 | Remove EventCounter and Log counters from JVM Metrics | Major | common | Akira Ajisaka | Viraj Jasani | | HADOOP-17576 | ABFS: Disable throttling update for auth failures | Major | fs/azure | Sumangala Patki | Sumangala Patki | | YARN-10723 | Change CS nodes page in UI to support custom resource. | Major | resourcemanager | Qi Zhu | Qi Zhu | | HADOOP-16948 | ABFS: Support infinite lease dirs | Minor | common | Billie Rinaldi | Billie Rinaldi | | YARN-10654 | Dots . in CSMappingRule path variables should be replaced | Major | capacity scheduler | Gergely Pollk | Peter Bacsko | | HADOOP-17112 | whitespace not allowed in paths when saving files to s3a via committer | Blocker | fs/s3 | Krzysztof Adamski | Krzysztof Adamski | | YARN-10637 | fs2cs: add queue autorefresh policy during conversion | Major | fairscheduler, fs-cs | Qi Zhu | Qi Zhu | | HADOOP-17661 | mvn versions:set fails to parse pom.xml | Blocker | build | Wei-Chiu Chuang | Wei-Chiu Chuang | | HDFS-15961 | standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories | Major | snapshots | Nilotpal Nandi | Shashikant Banerjee | | YARN-10739 | GenericEventHandler.printEventQueueDetails causes RM recovery to take too much time | Critical | resourcemanager | Zhanqi Cai | Qi Zhu | | HADOOP-11245 | Update NFS gateway to use Netty4 | Major | nfs | Brandon Li | Wei-Chiu Chuang | | YARN-10707 | Support custom resources in ResourceUtilization, and update Node GPU Utilization to use. 
| Major | gpu, yarn | Qi Zhu | Qi Zhu | | HADOOP-17653 | Do not use guava's Files.createTempDir() | Major | common | Wei-Chiu Chuang | Wei-Chiu Chuang | | HDFS-15952 | TestRouterRpcMultiDestination#testProxyGetTransactionID and testProxyVersionRequest are flaky | Major | rbf | Harunobu Daikoku | Akira Ajisaka | | HDFS-15923 | RBF: Authentication failed when rename across sub clusters | Major | rbf | zhuobin zheng | zhuobin zheng | | HADOOP-17644 | Add back the exceptions removed by HADOOP-17432 for compatibility | Blocker | build | Akira Ajisaka | Quan Li | | HDFS-15997 | Implement dfsadmin -provisionSnapshotTrash -all | Major | dfsadmin | Siyao Meng | Siyao Meng | | YARN-10642 | Race condition: AsyncDispatcher can get stuck by the changes introduced in YARN-8995 | Critical | resourcemanager | Chenyu Zheng | Chenyu Zheng | | YARN-9615 | Add dispatcher metrics to RM | Major | metrics, resourcemanager | Jonathan Hung | Qi Zhu | | YARN-10571 | Refactor dynamic queue handling logic | Minor | capacity scheduler | Andras Gyori | Andras Gyori | | HADOOP-17685 | Fix junit deprecation warnings in hadoop-common module | Major | test | Akira Ajisaka | Akira Ajisaka | | YARN-10761 | Add more event type to RM Dispatcher event metrics. | Major | resourcemanager | Qi Zhu | Qi Zhu | | HADOOP-17665 | Ignore missing keystore configuration in reloading mechanism | Major | common | Borislav Iordanov | Borislav Iordanov | | HADOOP-17663 | Remove useless property hadoop.assemblies.version in pom file | Trivial | build | Wei-Chiu Chuang | Akira Ajisaka | | HADOOP-17115 | Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools | Major | common | Ahmed Hussein | Viraj Jasani | | HADOOP-17666 | Update LICENSE for 3.3.1 | Blocker | common | Wei-Chiu Chuang | Wei-Chiu Chuang | | YARN-10771 | Add cluster metric for size of SchedulerEventQueue and RMEventQueue | Major | metrics, resourcemanager | chaosju | chaosju | | HADOOP-17722 | Replace Guava Sets usage by Hadoop's own Sets in hadoop-mapreduce-project | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17720 | Replace Guava Sets usage by Hadoop's own Sets in hadoop-hdfs-project | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17721 | Replace Guava Sets usage by Hadoop's own Sets in hadoop-yarn-project | Major | common | Viraj Jasani | Viraj Jasani | | YARN-10783 | Allow definition of auto queue template properties in root | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10782 | Extend /scheduler endpoint with template properties | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HDFS-15973 | RBF: Add permission check before doing router federation rename.
| Major | rbf | Jinglun | Jinglun | | HADOOP-17152 | Implement wrapper for guava newArrayList and newLinkedList | Major | common | Ahmed Hussein | Viraj Jasani | | YARN-10807 | Parents node labels are incorrectly added to child queues in weight mode | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-10801 | Fix Auto Queue template to properly set all configuration properties | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10780 | Optimise retrieval of configured node labels in CS queues | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HDFS-15659 | Set dfs.namenode.redundancy.considerLoad to false in MiniDFSCluster | Major | test | Akira Ajisaka | Ahmed Hussein | | HADOOP-17331 | [JDK 15] TestDNS fails by UncheckedIOException | Major | test | Akira Ajisaka | Akira Ajisaka | | HDFS-15671" }, { "data": "TestBalancerRPCDelay#testBalancerRPCDelayQpsDefault fails on Trunk | Major | balancer, test | Ahmed Hussein | Ahmed Hussein | | HADOOP-17596 | ABFS: Change default Readahead Queue Depth from num(processors) to const | Major | fs/azure | Sumangala Patki | Sumangala Patki | | HADOOP-17715 | ABFS: Append blob tests with non HNS accounts fail | Minor | fs/azure | Sneha Varma | Sneha Varma | | HADOOP-17714 | ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs | Minor | test | Sneha Varma | Sneha Varma | | HADOOP-17795 | Provide fallbacks for callqueue.impl and scheduler.impl | Major | ipc | Viraj Jasani | Viraj Jasani | | HADOOP-16272 | Update HikariCP to 4.0.3 | Major | build, common | Yuming Wang | Viraj Jasani | | HDFS-16067 | Support Append API in NNThroughputBenchmark | Minor | namanode | Renukaprasad C | Renukaprasad C | | YARN-10657 | We should make max application per queue to support node label. | Major | capacity scheduler | Qi Zhu | Andras Gyori | | YARN-10829 | Support getApplications API in FederationClientInterceptor | Major | federation, router | Akshat Bordia | Akshat Bordia | | HDFS-16140 | TestBootstrapAliasmap fails by BindException | Major | test | Akira Ajisaka | Akira Ajisaka | | YARN-10727 | ParentQueue does not validate the queue on removal | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10790 | CS Flexible AQC: Add separate parent and leaf template property. | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HADOOP-17814 | Provide fallbacks for identity/cost providers and backoff enable | Major | ipc | Viraj Jasani | Viraj Jasani | | YARN-10841 | Fix token reset synchronization for UAM response token | Minor | federation | Minni Mittal | Minni Mittal | | YARN-10838 | Implement an optimised version of Configuration getPropsWithPrefix | Major | capacity scheduler | Andras Gyori | Andras Gyori | | HDFS-16184 | De-flake TestBlockScanner#testSkipRecentAccessFile | Major | test | Viraj Jasani | Viraj Jasani | | HDFS-16143 | TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky | Major | test | Akira Ajisaka | Viraj Jasani | | HDFS-16192 | ViewDistributedFileSystem#rename wrongly using src in the place of dst. 
| Major | viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | HADOOP-17156 | Clear abfs readahead requests on stream close | Major | fs/azure | Rajesh Balamohan | Mukund Thakur | | YARN-10576 | Update Capacity Scheduler documentation with JSON-based placement mapping | Major | capacity scheduler, documentation | Peter Bacsko | Benjamin Teke | | YARN-10522 | Document for Flexible Auto Queue Creation in Capacity Scheduler | Major | capacity scheduler | Qi Zhu | Benjamin Teke | | YARN-10646 | TestCapacitySchedulerWeightMode test descriptor comments doesn't reflect the correct scenario | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-10919 | Remove LeafQueue#scheduler field | Minor | capacity scheduler | Szilard Nemeth | Benjamin Teke | | YARN-10893 | Add metrics for getClusterMetrics and getApplications APIs in FederationClientInterceptor | Major | federation, metrics, router | Akshat Bordia | Akshat Bordia | | YARN-10914 | Simplify duplicated code for tracking ResourceUsage in AbstractCSQueue | Minor | capacity scheduler | Szilard Nemeth | Tamas Domok | | YARN-10910 | AbstractCSQueue#setupQueueConfigs: Separate validation logic from initialization logic | Minor | capacity scheduler | Szilard Nemeth | Benjamin Teke | | YARN-10852 | Optimise CSConfiguration getAllUserWeightsForQueue | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10872 | Replace getPropsWithPrefix calls in AutoCreatedQueueTemplate | Major | capacity scheduler | Andras Gyori | Benjamin Teke | | YARN-10912 | AbstractCSQueue#updateConfigurableResourceRequirement: Separate validation logic from initialization logic | Minor | capacity scheduler | Szilard Nemeth | Tamas Domok | | YARN-10917 | Investigate and simplify CapacitySchedulerConfigValidator#validateQueueHierarchy | Minor | capacity scheduler | Szilard Nemeth | Tamas Domok | | YARN-10915 | AbstractCSQueue: Simplify complex logic in methods: deriveCapacityFromAbsoluteConfigurations and updateEffectiveResources | Minor | capacity scheduler | Szilard Nemeth | Benjamin Teke | | HDFS-16218 | RBF: Use HdfsConfiguration for passing in Router principal | Major | rbf | Akira Ajisaka | Fengnan Li | | HDFS-16217 | RBF: Set default value of hdfs.fedbalance.procedure.scheduler.journal.uri by adding appropriate config resources | Major | rbf | Akira Ajisaka | Viraj Jasani | | HDFS-16227 | testMoverWithStripedFile fails intermittently | Major | test | Viraj Jasani | Viraj Jasani | | YARN-10913 | AbstractCSQueue: Group preemption methods and fields into a separate class | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | YARN-10950 | Code cleanup in QueueCapacities | Minor | capacity scheduler | Szilard Nemeth | Adam Antal | | YARN-10911 | AbstractCSQueue: Create a separate class for usernames and weights that are travelling in a Map | Minor | capacity scheduler, test | Szilard Nemeth | Szilard Nemeth | | HDFS-16213 | Flaky test TestFsDatasetImpl#testDnRestartWithHardLink | Major | test | Viraj Jasani | Viraj Jasani | | YARN-10897 | Introduce QueuePath class | Major | resourcemanager, yarn | Andras Gyori | Andras Gyori | | YARN-10961 | TestCapacityScheduler: reuse appHelper where feasible | Major | capacity scheduler, test | Tamas Domok | Tamas Domok | | HDFS-16219 | RBF: Set default map tasks and bandwidth in RouterFederationRename | Major | rbf | Akira Ajisaka | Viraj Jasani | | HADOOP-17910 | [JDK 17] TestNetUtils fails | Major | common | Akira Ajisaka | Viraj Jasani | | HDFS-16231 |
TestDataNodeMetrics#testReceivePacketSlowMetrics | Major | datanode, metrics | Haiyang Hu | Haiyang Hu | | YARN-10957 | Use invokeConcurrent Overload with Collection in getClusterMetrics | Major | federation, router | Akshat Bordia | Akshat Bordia | | YARN-10960 | Extract test queues and related methods from TestCapacityScheduler | Major | capacity scheduler, test | Tamas Domok | Tamas Domok | | HADOOP-17929 | implement non-guava Precondition checkArgument | Major | command | Ahmed Hussein | Ahmed Hussein | | HDFS-16222 | Fix ViewDFS with mount points for HDFS only API | Major | viewfs | Ayush Saxena | Ayush Saxena | | HADOOP-17198 | Support S3 Access Points | Major | fs/s3 | Steve Loughran | Bogdan Stolojan | | HADOOP-17951 | AccessPoint verifyBucketExistsV2 always returns false | Trivial | fs/s3 | Bogdan Stolojan | Bogdan Stolojan | | HADOOP-17947 | Provide alternative to Guava VisibleForTesting | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17930 | implement non-guava Precondition checkState | Major | common | Ahmed Hussein | Ahmed Hussein | | HADOOP-17952 | Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-common-project modules | Major | common | Viraj Jasani | Viraj Jasani | | YARN-10962 | Do not extend from CapacitySchedulerTestBase when not needed | Major | capacity scheduler, test | Tamas Domok | Tamas Domok | | HADOOP-17957 | Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-hdfs-project modules | Major | build, common | Viraj Jasani | Viraj Jasani | | HADOOP-17959 | Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-cloud-storage-project and hadoop-mapreduce-project modules | Major | build, common | Viraj Jasani | Viraj Jasani | | HADOOP-17962 | Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-tools modules | Major | build, common | Viraj Jasani | Viraj Jasani | | HADOOP-17963 | Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-yarn-project modules | Major | build, common | Viraj Jasani | Viraj Jasani | | HADOOP-17123 | remove guava Preconditions from Hadoop-common-project modules | Major | common | Ahmed Hussein | Ahmed Hussein | | HDFS-16276 | RBF: Remove the useless configuration of rpc isolation in md | Minor | documentation, rbf | Xiangyi Zhu | Xiangyi Zhu | | YARN-10942 | Move AbstractCSQueue fields to separate objects that are tracking usage | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | YARN-10954 | Remove commented code block from CSQueueUtils#loadCapacitiesByLabelsFromConf | Trivial | capacity scheduler | Szilard Nemeth | Andras Gyori | | YARN-10949 | Simplify AbstractCSQueue#updateMaxAppRelatedField and find a more meaningful name for this method | Minor | capacity scheduler | Szilard Nemeth | Andras Gyori | | YARN-10958 | Use correct configuration for Group service init in CSMappingPlacementRule | Major | capacity scheduler | Peter Bacsko | Szilard Nemeth | | YARN-10916 | Simplify GuaranteedOrZeroCapacityOverTimePolicy#computeQueueManagementChanges | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | YARN-10948 | Rename SchedulerQueue#activeQueue to activateQueue | Minor | capacity scheduler | Szilard Nemeth | Adam Antal | | YARN-10930 | Introduce universal configured capacity vector | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-10909 | AbstractCSQueue: Annotate all methods with VisibleForTesting that are only used by test code | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | HADOOP-17970 |
unguava: remove Preconditions from hdfs-projects module | Major | common | Ahmed Hussein | Ahmed Hussein | | YARN-10924 | Clean up CapacityScheduler#initScheduler | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | YARN-10904 | Remove unnecessary fields from AbstractCSQueue or group fields by feature if possible | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | YARN-10985 | Add some tests to verify ACL behaviour in CapacitySchedulerConfiguration | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | HADOOP-17374 | AliyunOSS: support ListObjectsV2 | Major | fs/oss | wujinhu | wujinhu | | YARN-10998 | Add YARNROUTERHEAPSIZE to yarn-env for routers | Minor | federation, router | Minni Mittal | Minni Mittal | | HADOOP-18018 | unguava: remove Preconditions from hadoop-tools modules | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-18017 | unguava: remove Preconditions from hadoop-yarn-project modules | Major | common | Viraj Jasani | Viraj Jasani | | YARN-11003 | Make RMNode aware of all (OContainer inclusive) allocated resources | Minor | container, resourcemanager | Andrew Chung | Andrew Chung | | HDFS-16336 | De-flake TestRollingUpgrade#testRollback | Minor | hdfs, test | Kevin Wikant | Viraj Jasani | | HDFS-16171 | De-flake testDecommissionStatus | Major | test | Viraj Jasani | Viraj Jasani | | HADOOP-18022 | Add restrict-imports-enforcer-rule for Guava Preconditions in hadoop-main pom | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-18025 | Upgrade HBase version to" }, { "data": "for hbase1 profile | Major | build | Viraj Jasani | Viraj Jasani | | YARN-11031 | Improve the maintainability of RM webapp tests like TestRMWebServicesCapacitySched | Major | capacity scheduler | Tamas Domok | Tamas Domok | | YARN-11038 | Fix testQueueSubmitWithACL* tests in TestAppManager | Major | yarn | Tamas Domok | Tamas Domok | | YARN-11005 | Implement the core QUEUELENGTHTHEN_RESOURCES OContainer allocation policy | Minor | resourcemanager | Andrew Chung | Andrew Chung | | YARN-10982 | Replace all occurences of queuePath with the new QueuePath class | Major | capacity scheduler | Andras Gyori | Tibor Kovcs | | HADOOP-18039 | Upgrade hbase2 version and fix TestTimelineWriterHBaseDown | Major | build | Viraj Jasani | Viraj Jasani | | YARN-11024 | Create an AbstractLeafQueue to store the common LeafQueue + AutoCreatedLeafQueue functionality | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-10907 | Minimize usages of AbstractCSQueue#csContext | Major | capacity scheduler | Szilard Nemeth | Benjamin Teke | | YARN-10929 | Do not use a separate config in legacy CS AQC | Minor | capacity scheduler | Szilard Nemeth | Benjamin Teke | | YARN-11043 | Clean up checkstyle warnings from YARN-11024/10907/10929 | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-10963 | Split TestCapacityScheduler by test categories | Major | capacity scheduler | Tamas Domok | Tamas Domok | | YARN-10951 | CapacityScheduler: Move all fields and initializer code that belongs to async scheduling to a new class | Minor | capacity scheduler | Szilard Nemeth | Szilard Nemeth | | HADOOP-16908 | Prune Jackson 1 from the codebase and restrict its usage for future | Major | common | Wei-Chiu Chuang | Viraj Jasani | | HDFS-16168 | TestHDFSFileSystemContract#testAppend fails | Major | test | Hui Fei | secfree | | YARN-8859 | Add audit logs for router service | Major | router | Bibin Chundatt | Minni Mittal | | YARN-10632 | Make auto queue creation maximum 
allowed depth configurable | Major | capacity scheduler | Qi Zhu | Andras Gyori | | YARN-11034 | Add enhanced headroom in AllocateResponse | Major | federation | Minni Mittal | Minni Mittal | | HDFS-16429 | Add DataSetLockManager to manage fine-grain locks for FsDataSetImpl | Major | hdfs | Mingxiang Li | Mingxiang Li | | HDFS-16169 | Fix TestBlockTokenWithDFSStriped#testEnd2End failure | Major | test | Hui Fei | secfree | | YARN-10995 | Move PendingApplicationComparator from GuaranteedOrZeroCapacityOverTimePolicy | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | YARN-10947 | Simplify AbstractCSQueue#initializeQueueState | Minor | capacity scheduler | Szilard Nemeth | Andras Gyori | | YARN-10944 | AbstractCSQueue: Eliminate code duplication in overloaded versions of setMaxCapacity | Minor | capacity scheduler | Szilard Nemeth | Andras Gyori | | YARN-10590 | Consider legacy auto queue creation absolute resource template to avoid rounding errors | Major | capacity scheduler | Qi Zhu | Andras Gyori | | HDFS-16458 | [SPS]: Fix bug for unit test of reconfiguring SPS mode | Major | sps, test | Tao Li | Tao Li | | YARN-10983 | Follow-up changes for YARN-10904 | Minor | capacityscheduler | Szilard Nemeth | Benjamin Teke | | YARN-10945 | Add javadoc to all methods of AbstractCSQueue | Major | capacity scheduler, documentation | Szilard Nemeth | Andras Gyori | | YARN-10918 | Simplify CapacitySchedulerQueueManager#parseQueue | Minor | capacity scheduler | Szilard Nemeth | Andras Gyori | | HADOOP-17526 | Use Slf4jRequestLog for HttpRequestLog | Major | common | Akira Ajisaka | Duo Zhang | | YARN-11036 | Do not inherit from TestRMWebServicesCapacitySched | Major | capacity scheduler, test | Tamas Domok | Tamas Domok | | YARN-10049 | FIFOOrderingPolicy Improvements | Major | scheduler | Manikandan R | Benjamin Teke | | HDFS-16499 | [SPS]: Should not start indefinitely while another SPS process is running | Major | sps | Tao Li | Tao Li | | YARN-10565 | Follow-up to YARN-10504 | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | HDFS-13248 | RBF: Namenode need to choose block location for the client | Major | rbf | Wu Weiwei | Owen O'Malley | | HDFS-15987 | Improve oiv tool to parse fsimage file in parallel with delimited format | Major | tools | Hongbing Wang | Hongbing Wang | | HADOOP-13386 | Upgrade Avro to 1.9.2 | Major | build | Java Developer | PJ Fanning | | HDFS-16477 | [SPS]: Add metric PendingSPSPaths for getting the number of paths to be processed by SPS | Major | sps | Tao Li | Tao Li | | HADOOP-18180 | Remove use of scala jar twitter util-core with java futures in S3A prefetching stream | Major | fs/s3 | PJ Fanning | PJ Fanning | | HDFS-16460 | [SPS]: Handle failure retries for moving tasks | Major | sps | Tao Li | Tao Li | | HDFS-16484 | [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread | Major | sps | qinyuren | qinyuren | | HDFS-16526 | Add metrics for slow DataNode | Major | datanode, metrics | Renukaprasad C | Renukaprasad C | | HDFS-16255 | RBF: Fix dead link to fedbalance document | Minor | documentation | Akira Ajisaka | Ashutosh Gupta | | HDFS-16488 | [SPS]: Expose metrics to JMX for external SPS | Major | metrics, sps | Tao Li | Tao Li | | HADOOP-18177 | document use and architecture design of prefetching s3a input stream | Major | documentation, fs/s3 | Steve Loughran | Ahmar Suhail | | HADOOP-17682 | ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters | Major | fs/azure | Sumangala Patki | Sumangala
Patki | | HADOOP-15983 | Use jersey-json that is built to use jackson2 | Major | build | Akira Ajisaka | PJ Fanning | | YARN-11130 | RouterClientRMService Has Unused import | Minor | federation, router | Shilun Fan | Shilun Fan | | YARN-11122 | Support getClusterNodes API in FederationClientInterceptor | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18229 | Fix Hadoop Common Java Doc Errors | Major | build, common | Shilun Fan | Shilun Fan | | YARN-10465 | Support getNodeToLabels, getLabelsToNodes, getClusterNodeLabels APIs for Federation | Major | federation | D M Murali Krishna Reddy | Shilun Fan | | YARN-11137 | Improve log message in FederationClientInterceptor | Minor | federation | Shilun Fan | Shilun Fan | | HDFS-15878 | RBF: Fix TestRouterWebHDFSContractCreate#testSyncable | Major | hdfs, rbf | Renukaprasad C | Hanley Yang | | YARN-10487 | Support getQueueUserAcls, listReservations, getApplicationAttempts, getContainerReport, getContainers, getResourceTypeInfo APIs for Federation | Major | federation, router | D M Murali Krishna Reddy | Shilun Fan | | YARN-11159 | Support failApplicationAttempt, updateApplicationPriority, updateApplicationTimeouts APIs for Federation | Major | federation | Shilun Fan | Shilun Fan | | HDFS-16598 | Fix DataNode FsDatasetImpl lock issue without GS checks. | Major | datanode | ZanderXu | ZanderXu | | HADOOP-18289 | Remove WhiteBox in hadoop-kms module. | Minor | common | Shilun Fan | Shilun Fan | | HDFS-16600 | Fix deadlock of fine-grain lock for FsDatastImpl of DataNode. | Major | datanode | ZanderXu | ZanderXu | | YARN-10122 | Support signalToContainer API for Federation | Major | federation, yarn | Sushanta Sen | Shilun Fan | | HADOOP-18266 | Replace with HashSet/TreeSet constructor in Hadoop-common-project | Trivial | common | Samrat Deb | Samrat Deb | | YARN-9874 | Remove unnecessary LevelDb write call in LeveldbConfigurationStore#confirmMutation | Minor | capacityscheduler | Prabhu Joseph | Ashutosh Gupta | | HDFS-16256 | Minor fixes in HDFS Fedbalance document | Minor | documentation | Akira Ajisaka | Ashutosh Gupta | | YARN-9822 | TimelineCollectorWebService#putEntities blocked when ATSV2 HBase is" }, { "data": "| Major | ATSv2 | Prabhu Joseph | Ashutosh Gupta | | YARN-10287 | Update scheduler-conf corrupts the CS configuration when removing queue which is referred in queue mapping | Major | capacity scheduler | Akhil PB | Ashutosh Gupta | | YARN-9403 | GET /apps/{appid}/entities/YARN_APPLICATION accesses application table instead of entity table | Major | ATSv2 | Prabhu Joseph | Ashutosh Gupta | | HDFS-16283 | RBF: improve renewLease() to call only a specific NameNode rather than make fan-out calls | Major | rbf | Aihua Xu | ZanderXu | | HADOOP-18231 | tests in ITestS3AInputStreamPerformance are failing | Minor | fs/s3 | Ahmar Suhail | Ahmar Suhail | | HADOOP-18254 | Add in configuration option to enable prefetching | Minor | fs/s3 | Ahmar Suhail | Ahmar Suhail | | YARN-11160 | Support getResourceProfiles, getResourceProfile APIs for Federation | Major | federation | Shilun Fan | Shilun Fan | | YARN-8900 | [Router] Federation: routing getContainers REST invocations transparently to multiple RMs | Major | federation, router | Giovanni Matteo Fumarola | Shilun Fan | | HDFS-15079 | RBF: Client maybe get an unexpected result with network anomaly | Critical | rbf | Hui Fei | ZanderXu | | YARN-11203 | Fix typo in hadoop-yarn-server-router module | Minor | federation | Shilun Fan | Shilun Fan | | YARN-11161 | Support getAttributesToNodes, 
getClusterNodeAttributes, getNodesToAttributes APIs for Federation | Major | federation | Shilun Fan | Shilun Fan | | YARN-10883 | [Router] Router Audit Log Add Client IP Address. | Major | federation, router | chaosju | Shilun Fan | | HADOOP-18190 | Collect IOStatistics during S3A prefetching | Major | fs/s3 | Steve Loughran | Ahmar Suhail | | HADOOP-18344 | AWS SDK update to 1.12.262 to address jackson CVE-2018-7489 and AWS CVE-2022-31159 | Major | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11212 | [Federation] Add getNodeToLabels REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-11180 | Refactor some code of getNewApplication, submitApplication, forceKillApplication, getApplicationReport | Minor | federation | Shilun Fan | Shilun Fan | | YARN-11220 | [Federation] Add getLabelsToNodes, getClusterNodeLabels, getLabelsOnNode REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-8973 | [Router] Add missing methods in RMWebProtocol | Major | federation, router | Giovanni Matteo Fumarola | Shilun Fan | | YARN-6972 | Adding RM ClusterId in AppInfo | Major | federation | Giovanni Matteo Fumarola | Tanuj Nayak | | YARN-11230 | [Federation] Add getContainer, signalToContainer REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-11235 | [RESERVATION] Refactor Policy Code and Define getReservationHomeSubcluster | Major | federation | Shilun Fan | Shilun Fan | | YARN-10793 | Upgrade Junit from 4 to 5 in hadoop-yarn-server-applicationhistoryservice | Major | test | ANANDA G B | Ashutosh Gupta | | YARN-11227 | [Federation] Add getAppTimeout, getAppTimeouts, updateApplicationTimeout REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | HDFS-13274 | RBF: Extend RouterRpcClient to use multiple sockets | Major | rbf | Íñigo Goiri | Íñigo Goiri | | YARN-6539 | Create SecureLogin inside Router | Minor | federation, router | Giovanni Matteo Fumarola | Xie YiFan | | YARN-11148 | In federation and security mode, nm recover may fail. | Major | nodemanager | Chenyu Zheng | Chenyu Zheng | | YARN-11236 | [RESERVATION] Implement FederationReservationHomeSubClusterStore With MemoryStore | Major | federation | Shilun Fan | Shilun Fan | | YARN-11223 | [Federation] Add getAppPriority, updateApplicationPriority REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-11224 | [Federation] Add getAppQueue, updateAppQueue REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-11252 | [RESERVATION] Yarn Federation Router Supports Update / Delete Reservation in MemoryStore | Major | federation | Shilun Fan | Shilun Fan | | YARN-11269 | Upgrade JUnit from 4 to 5 in hadoop-yarn-server-timeline-pluginstorage | Major | test, yarn | Ashutosh Gupta | Ashutosh Gupta | | YARN-11250 | Capture the Performance Metrics of ZookeeperFederationStateStore | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18380 | fs.s3a.prefetch.block.size to be read through longBytesOption | Major | fs/s3 | Steve Loughran | Viraj Jasani | | YARN-11219 | [Federation] Add getAppActivities, getAppStatistics REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-8482 | [Router] Add cache for fast answers to getApps | Major | federation, router | Giovanni Matteo Fumarola | Shilun Fan | | YARN-11275 | [Federation] Add batchFinishApplicationMaster in UAMPoolManager | Major | federation, nodemanager | Shilun Fan | Shilun Fan | | YARN-11245 | Upgrade JUnit from 4 to 5 in hadoop-yarn-csi | Major
| yarn-csi | Ashutosh Gupta | Ashutosh Gupta | | YARN-11272 | [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With Zk | Major | federation, reservation system | Shilun Fan | Shilun Fan | | HADOOP-18339 | S3A storage class option only picked up when buffering writes to disk | Major | fs/s3 | Steve Loughran | Monthon Klongklaew | | YARN-11177 | Support getNewReservation, submitReservation, updateReservation, deleteReservation APIs for Federation | Major | federation | Shilun Fan | Shilun Fan | | YARN-6667 | Handle containerId duplicate without failing the heartbeat in Federation Interceptor | Minor | federation, router | Botong Huang | Shilun Fan | | YARN-11284 | [Federation] Improve UnmanagedAMPoolManager WithoutBlock ServiceStop | Major | federation | Shilun Fan | Shilun Fan | | YARN-11273 | [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With SQL | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18410 | S3AInputStream.unbuffer() async drain not releasing http connections | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11298 | Improve Yarn Router Junit Test Close MockRM | Major | federation | Shilun Fan | Shilun Fan | | YARN-11297 | Improve Yarn Router Reservation Submission Code | Major | federation | Shilun Fan | Shilun Fan | | HDFS-13522 | HDFS-13522: Add federated nameservices states to client protocol and propagate it between routers and clients. | Major | federation, namenode | Erik Krogen | Simbarashe Dzinamarira | | YARN-11265 | Upgrade JUnit from 4 to 5 in hadoop-yarn-server-sharedcachemanager | Major | test, yarn | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-18302 | Remove WhiteBox in hadoop-common module. | Minor | common | Shilun Fan | Shilun Fan | | YARN-11261 | Upgrade JUnit from 4 to 5 in hadoop-yarn-server-web-proxy | Major | test, yarn | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16767 | RBF: Support observer node from Router-Based Federation | Major | rbf | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HADOOP-18186 | s3a prefetching to use SemaphoredDelegatingExecutor for submitting work | Major | fs/s3 | Steve Loughran | Viraj Jasani | | YARN-11293 | [Federation] Router Support DelegationToken storeNewMasterKey/removeStoredMasterKey With MemoryStateStore | Major | federation | Shilun Fan | Shilun Fan | | YARN-11283 | [Federation] Fix Typo of NodeManager" }, { "data": "| Minor | federation, nodemanager | Shilun Fan | Shilun Fan | | HADOOP-18377 | hadoop-aws maven build to add a prefetch profile to run all tests with prefetching | Major | fs/s3, test | Steve Loughran | Viraj Jasani | | YARN-11307 | Fix Yarn Router Broken Link | Minor | federation, router | Shilun Fan | Shilun Fan | | HADOOP-18455 | s3a prefetching Executor should be closed | Major | fs/s3 | Viraj Jasani | Viraj Jasani | | YARN-11270 | Upgrade JUnit from 4 to 5 in hadoop-yarn-server-timelineservice-hbase-client | Major | test, yarn | Ashutosh Gupta | Ashutosh Gupta | | YARN-11271 | Upgrade JUnit from 4 to 5 in hadoop-yarn-server-timelineservice-hbase-common | Major | test, yarn | Ashutosh Gupta | Ashutosh Gupta | | YARN-11316 | [Federation] Fix Yarn federation.md table format | Major | federation | Shilun Fan | Shilun Fan | | YARN-11308 | Router Page display the db username and password in mask mode | Major | federation | Shilun Fan | Shilun Fan | | YARN-11310 | [Federation] Refactoring Routers Federation Web Page | Major | federation | Shilun Fan | Shilun Fan | | YARN-11238 | Optimizing FederationClientInterceptor Call 
with Parallelism | Major | federation | Shilun Fan | Shilun Fan | | YARN-11318 | Improve FederationInterceptorREST#createInterceptorForSubCluster Use WebAppUtils | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11324 | [Federation] Fix some PBImpl classes to avoid NPE. | Major | federation, router, yarn | Shilun Fan | Shilun Fan | | HADOOP-18382 | Upgrade AWS SDK to V2 - Prerequisites | Minor | fs/s3 | Ahmar Suhail | Ahmar Suhail | | YARN-11313 | [Federation] Add SQLServer Script and Supported DB Version in Federation.md | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18378 | Implement readFully(long position, byte[] buffer, int offset, int length) | Minor | fs/s3 | Ahmar Suhail | Alessandro Passaro | | YARN-11260 | Upgrade JUnit from 4 to 5 in hadoop-yarn-server-timelineservice | Major | test, yarn | Ashutosh Gupta | Ashutosh Gupta | | HDFS-16783 | Remove the redundant lock in deepCopyReplica and getFinalizedBlocks | Major | datanode | ZanderXu | ZanderXu | | HDFS-16787 | Remove the redundant lock in DataSetLockManager#removeLock. | Major | datanode | ZanderXu | ZanderXu | | HADOOP-18480 | upgrade AWS SDK to 1.12.316 | Major | build, fs/s3 | Steve Loughran | Steve Loughran | | YARN-11315 | [Federation] YARN Federation Router Supports Cross-Origin. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11317 | [Federation] Refactoring Yarn Routers About Web Page. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11334 | [Federation] Improve SubClusterState#fromString parameter and LogMessage | Trivial | federation | Shilun Fan | Shilun Fan | | YARN-11327 | [Federation] Refactoring Yarn Routers Node Web Page. | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11294 | [Federation] Router Support DelegationToken storeNewToken/updateStoredToken/removeStoredToken With MemoryStateStore | Major | federation | Shilun Fan | Shilun Fan | | YARN-8041 | [Router] Federation: Improve Router REST API Metrics | Minor | federation, router | YR | Shilun Fan | | YARN-11247 | Remove unused classes introduced by YARN-9615 | Minor | resourcemanager | Shilun Fan | Shilun Fan | | HADOOP-18304 | Improve S3A committers documentation clarity | Trivial | documentation | Daniel Carl Jones | Daniel Carl Jones | | HADOOP-18189 | S3PrefetchingInputStream to support status probes when closed | Minor | fs/s3 | Steve Loughran | Viraj Jasani | | HADOOP-18465 | S3A server-side encryption tests fail before checking encryption tests should skip | Minor | fs/s3, test | Daniel Carl Jones | Daniel Carl Jones | | YARN-11342 | [Federation]" }, { "data": "FederationClientInterceptor#submitApplication Use FederationActionRetry | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11295 | [Federation] Router Support DelegationToken in MemoryStore mode | Major | federation | Shilun Fan | Shilun Fan | | YARN-11345 | [Federation] Refactoring Yarn Routers Application Web Page. | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11336 | Upgrade Junit 4 to 5 in hadoop-yarn-applications-catalog-webapp | Major | test | Ashutosh Gupta | Ashutosh Gupta | | YARN-11338 | Upgrade Junit 4 to 5 in hadoop-yarn-applications-unmanaged-am-launcher | Major | test | Ashutosh Gupta | Ashutosh Gupta | | YARN-11229 | [Federation] Add checkUserAccessToQueue REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-11332 | [Federation] Improve FederationClientInterceptor#ThreadPool thread pool configuration. 
| Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11337 | Upgrade Junit 4 to 5 in hadoop-yarn-applications-mawo | Major | test | Ashutosh Gupta | Ashutosh Gupta | | YARN-11339 | Upgrade Junit 4 to 5 in hadoop-yarn-services-api | Major | test | Ashutosh Gupta | Ashutosh Gupta | | YARN-11264 | Upgrade JUnit from 4 to 5 in hadoop-yarn-server-tests | Major | test, yarn | Ashutosh Gupta | Ashutosh Gupta | | HADOOP-18482 | ITestS3APrefetchingInputStream does not skip if no CSV test file available | Minor | fs/s3 | Daniel Carl Jones | Daniel Carl Jones | | YARN-11366 | Improve equals, hashCode(), toString() methods of the Federation Base Object | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11354 | [Federation] Add Yarn Routers NodeLabel Web Page. | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11368 | [Federation] Improve Yarn Routers Federation Page style. | Major | federation, router | Shilun Fan | Shilun Fan | | HADOOP-15327 | Upgrade MR ShuffleHandler to use Netty4 | Major | common | Xiaoyu Yao | Szilard Nemeth | | HDFS-16785 | Avoid to hold write lock to improve performance when add volume. | Major | datanode | ZanderXu | ZanderXu | | YARN-11359 | [Federation] Routing admin invocations transparently to multiple RMs. | Major | federation, router | Shilun Fan | Shilun Fan | | MAPREDUCE-7422 | Upgrade Junit 4 to 5 in hadoop-mapreduce-examples | Major | test | Ashutosh Gupta | Ashutosh Gupta | | YARN-6946 | Upgrade JUnit from 4 to 5 in hadoop-yarn-common | Major | test | Akira Ajisaka | Ashutosh Gupta | | YARN-11371 | [Federation] Refactor FederationInterceptorREST#createNewApplication\\submitApplication Use FederationActionRetry | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18531 | assertion failure in ITestS3APrefetchingInputStream | Major | fs/s3, test | Steve Loughran | Ashutosh Gupta | | HADOOP-18457 | ABFS: Support for account level throttling | Major | fs/azure | Anmol Asrani | Anmol Asrani | | YARN-10946 | AbstractCSQueue: Create separate class for constructing Queue API objects | Minor | capacity scheduler | Szilard Nemeth | Peter Szucs | | YARN-11158 | Support getDelegationToken, renewDelegationToken, cancelDelegationToken APIs for Federation | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18560 | AvroFSInput opens a stream twice and discards the second one without closing | Blocker | fs | Steve Loughran | Steve Loughran | | YARN-11373 | [Federation] Support refreshQueuesrefreshNodes APIs for Federation. 
| Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11350 | [Federation] Router Support DelegationToken With ZK | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11358 | [Federation] Add FederationInterceptor#allow-partial-result" }, { "data": "| Major | federation, router | Shilun Fan | Shilun Fan | | HADOOP-18526 | Leak of S3AInstrumentation instances via hadoop Metrics references | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18546 | disable purging list of in progress reads in abfs stream closed | Blocker | fs/azure | Steve Loughran | Pranav Saxena | | HADOOP-18577 | ABFS: add probes of readahead fix | Major | fs/azure | Steve Loughran | Steve Loughran | | YARN-11226 | [Federation] Add createNewReservation, submitReservation, updateReservation, deleteReservation REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-11225 | [Federation] Add postDelegationToken, postDelegationTokenExpiration, cancelDelegationToken REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18320 | Improve S3A delegations token documentation | Minor | fs/s3 | Ahmar Suhail | Ahmar Suhail | | YARN-11374 | [Federation] Support refreshSuperUserGroupsConfigurationrefreshUserToGroupsMappings APIs for Federation | Major | federation, router | Shilun Fan | Shilun Fan | | MAPREDUCE-7417 | Upgrade Junit 4 to 5 in hadoop-mapreduce-client-uploader | Major | test | Ashutosh Gupta | Ashutosh Gupta | | MAPREDUCE-7413 | Upgrade Junit 4 to 5 in hadoop-mapreduce-client-hs-plugins | Major | test | Ashutosh Gupta | Ashutosh Gupta | | YARN-11320 | [Federation] Add getSchedulerInfo REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-6971 | Clean up different ways to create resources | Minor | resourcemanager, scheduler | Yufei Gu | Riya Khandelwal | | YARN-10965 | Centralize queue resource calculation based on CapacityVectors | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-11218 | [Federation] Add getActivities, getBulkActivities REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18246 | Remove lower limit on s3a prefetching/caching block size | Minor | fs/s3 | Daniel Carl Jones | Ankit Saurabh | | YARN-11217 | [Federation] Add dumpSchedulerLogs REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18620 | Avoid using grizzly-http-* APIs | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-18206 | Cleanup the commons-logging references in the code base | Major | common | Duo Zhang | Viraj Jasani | | HADOOP-18630 | Add gh-pages in" }, { "data": "to deploy the current trunk doc | Major | common | Ayush Saxena | Simhadri Govindappa | | YARN-3657 | Federation maintenance mechanisms (simple CLI and command propagation) | Major | nodemanager, resourcemanager | Carlo Curino | Shilun Fan | | YARN-6572 | Refactoring Router services to use common util classes for pipeline creations | Major | federation | Giovanni Matteo Fumarola | Shilun Fan | | HADOOP-18351 | S3A prefetching: Error logging during reads | Minor | fs/s3 | Ahmar Suhail | Ankit Saurabh | | YARN-11349 | [Federation] Router Support DelegationToken With SQL | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11228 | [Federation] Add getAppAttempts, getAppAttempt REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-5604 | Add versioning for FederationStateStore | Major | federation, router | Subramaniam Krishnan | Shilun Fan | | YARN-11222 | [Federation] Add 
addToClusterNodeLabels, removeFromClusterNodeLabels REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | YARN-11289 | [Federation] Improve NM FederationInterceptor removeAppFromRegistry | Major | federation, nodemanager | Shilun Fan | Shilun Fan | | YARN-11221 | [Federation] Add replaceLabelsOnNodes, replaceLabelsOnNode REST APIs for Router | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18642 | Cut excess dependencies from hadoop-azure, hadoop-aliyun transitive imports; fix LICENSE-binary | Blocker | build, fs/azure, fs/oss | Steve Loughran | Steve Loughran | | YARN-11375 | [Federation] Support refreshAdminAclsrefreshServiceAcls APIs for Federation | Major | federation, router | Shilun Fan | Shilun Fan | | HADOOP-18648 | Avoid loading kms log4j properties dynamically by KMSWebServer | Major | kms | Viraj Jasani | Viraj Jasani | | YARN-8972 | [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size | Major | federation, router | Giovanni Matteo Fumarola | Shilun Fan | | HADOOP-18653 | LogLevel servlet to determine log impl before using setLevel | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-18649 | CLA and CRLA appenders to be replaced with RFA | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-18654 | Remove unused custom appender TaskLogAppender | Major | common | Viraj Jasani | Viraj Jasani | | YARN-11445 | [Federation] Add getClusterInfo, getClusterUserInfo REST APIs for Router. | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18631 | Migrate Async appenders to log4j properties | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-18669 | Remove Log4Json Layout | Major | common | Viraj Jasani | Viraj Jasani | | YARN-11376 | [Federation] Support updateNodeResourcerefreshNodesResources APIs for Federation. | Major | federation, router | Shilun Fan | Shilun Fan | | HADOOP-18606 | Add reason in in x-ms-client-request-id on a retry API call. | Minor | fs/azure | Pranav Saxena | Pranav Saxena | | HADOOP-18146 | ABFS: Add changes for expect hundred continue header with append requests | Major | fs/azure | Anmol Asrani | Anmol Asrani | | YARN-11446 | [Federation] Add updateSchedulerConfiguration, getSchedulerConfiguration REST APIs for Router. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11442 | Refactor FederationInterceptorREST Code | Major | federation, router | Shilun Fan | Shilun Fan | | HADOOP-18647 | x-ms-client-request-id to have some way that identifies retry of an API. 
| Minor | fs/azure | Pranav Saxena | Pranav Saxena | | HADOOP-18012 | ABFS: Enable config controlled ETag check for Rename idempotency | Major | fs/azure | Sneha Vijayarajan | Sree Bhattacharyya | | YARN-11377 | [Federation] Support addToClusterNodeLabelsremoveFromClusterNodeLabelsreplaceLabelsOnNode APIs for Federation | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-10846 | Add dispatcher metrics to NM | Major | nodemanager | chaosju | Shilun Fan | | YARN-11239 | Optimize FederationClientInterceptor audit log | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18399 | S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator | Major | fs/s3 | Steve Loughran | Viraj Jasani | | YARN-11378 | [Federation] Support checkForDecommissioningNodesrefreshClusterMaxPriority APIs for Federation | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11379 | [Federation] Support mapAttributesToNodesgetGroupsForUser APIs for Federation | Major | federation, router | Shilun Fan | Shilun Fan | | YARN-9049 | Add application submit data to state store | Major | federation | Bibin Chundatt | Shilun Fan | | YARN-11079 | Make an AbstractParentQueue to store common ParentQueue and ManagedParentQueue functionality | Major | capacity scheduler | Benjamin Teke | Susheel Gupta | | YARN-11340 | [Federation] Improve SQLFederationStateStore DataSource Config | Major | federation | Shilun Fan | Shilun Fan | | YARN-11424 | [Federation] Router AdminCLI Supports" }, { "data": "| Major | federation | Shilun Fan | Shilun Fan | | YARN-6740 | Federation Router (hiding multiple RMs for ApplicationClientProtocol) phase 2 | Major | federation, router | Giovanni Matteo Fumarola | Shilun Fan | | HADOOP-18688 | S3A audit header to include count of items in delete ops | Major | fs/s3 | Steve Loughran | Viraj Jasani | | YARN-11493 | [Federation] ConfiguredRMFailoverProxyProvider Supports Randomly Select an Router. | Major | federation | Shilun Fan | Shilun Fan | | YARN-8898 | Fix FederationInterceptor#allocate to set application priority in allocateResponse | Major | federation | Bibin Chundatt | Shilun Fan | | MAPREDUCE-7419 | Upgrade Junit 4 to 5 in hadoop-mapreduce-client-common | Major | test | Ashutosh Gupta | Ashutosh Gupta | | YARN-11478 | [Federation] SQLFederationStateStore Support Store ApplicationSubmitData | Major | federation | Shilun Fan | Shilun Fan | | YARN-7720 | Race condition between second app attempt and UAM timeout when first attempt node is down | Major | federation | Botong Huang | Shilun Fan | | YARN-11492 | Improve createJerseyClient#setConnectTimeout Code | Major | federation | Shilun Fan | Shilun Fan | | YARN-11500 | Fix typos in hadoop-yarn-server-common#federation | Minor | federation | Shilun Fan | Shilun Fan | | HADOOP-18752 | Change fs.s3a.directory.marker.retention to keep | Major | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11502 | Refactor AMRMProxy#FederationInterceptor#registerApplicationMaster | Major | amrmproxy, federation, nodemanager | Shilun Fan | Shilun Fan | | YARN-11505 | [Federation] Add Steps To Set up a Test Cluster. 
| Major | federation | Shilun Fan | Shilun Fan | | YARN-11516 | Improve existsApplicationHomeSubCluster/existsReservationHomeSubCluster Log Level | Minor | federation, router | Shilun Fan | Shilun Fan | | YARN-11510 | [Federation] Fix NodeManager#TestFederationInterceptor Flaky Unit Test | Major | federation, nodemanager | Shilun Fan | Shilun Fan | | YARN-11517 | Improve Federation#RouterCLI deregisterSubCluster Code | Major | federation, router | Shilun Fan | Shilun Fan | | HADOOP-18756 | CachingBlockManager to use AtomicBoolean for closed flag | Major | fs/s3 | Steve Loughran | Viraj Jasani | | YARN-11519 | [Federation] Add RouterAuditLog to log4j.properties | Major | router | Shilun Fan | Shilun Fan | | YARN-11090 | [GPG] Support Secure Mode | Major | gpg | tuyu | Shilun Fan | | YARN-11000 | Replace queue resource calculation logic in updateClusterResource | Major | capacity scheduler | Andras Gyori | Andras Gyori | | YARN-11524 | Improve the Policy Description in Federation.md | Major | federation | Shilun Fan | Shilun Fan | | YARN-11509 | The FederationInterceptor#launchUAM Added retry logic. | Minor | amrmproxy | Shilun Fan | Shilun Fan | | YARN-11515 | [Federation] Improve DefaultRequestInterceptor#init Code | Minor | federation, router | Shilun Fan | Shilun Fan | | YARN-11531 | [Federation] Code cleanup for NodeManager#amrmproxy | Minor | federation | Shilun Fan | Shilun Fan | | YARN-11525 | [Federation] Router CLI Supports Save the SubClusterPolicyConfiguration Of" }, { "data": "| Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11533 | CapacityScheduler CapacityConfigType changed in legacy queue allocation mode | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | HADOOP-18795 | s3a DelegationToken plugin to expand return type of deploy/binding | Minor | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11521 | Create a test set that runs with both Legacy/Uniform queue calculation | Major | capacityscheduler | Tamas Domok | Tamas Domok | | YARN-11508 | [Minor] Improve UnmanagedAMPoolManager/UnmanagedApplicationManager Code | Minor | federation | Shilun Fan | Shilun Fan | | YARN-11520 | Support capacity vector for AQCv2 dynamic templates | Major | capacityscheduler | Tamas Domok | Benjamin Teke | | YARN-11543 | Fix checkstyle issues after YARN-11520 | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | HADOOP-18183 | s3a audit logs to publish range start/end of GET requests in audit header | Minor | fs/s3 | Steve Loughran | Ankit Saurabh | | YARN-11536 | [Federation] Router CLI Supports Batch Save the SubClusterPolicyConfiguration Of Queues. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11153 | Make proxy server support YARN federation. | Major | yarn | Chenyu Zheng | Chenyu Zheng | | YARN-10201 | Make AMRMProxyPolicy aware of SC load | Major | amrmproxy | Young Chen | Shilun Fan | | HADOOP-18832 | Upgrade aws-java-sdk to 1.12.499+ | Major | fs/s3 | Viraj Jasani | Viraj Jasani | | YARN-11154 | Make router support proxy server. 
| Major | yarn | Chenyu Zheng | Chenyu Zheng | | YARN-10218 | [GPG] Support HTTPS in GPG | Major | federation | Bilwa S T | Shilun Fan | | HADOOP-18820 | AWS SDK v2: make the v1 bridging support optional | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18853 | AWS SDK V2 - Upgrade SDK to 2.20.28 and restores multipart copy | Major | fs/s3 | Ahmar Suhail | Ahmar Suhail | | YARN-6537 | Running RM tests against the Router | Minor | federation, resourcemanager | Giovanni Matteo Fumarola | Shilun Fan | | YARN-11435 | [Router] FederationStateStoreFacade is not reinitialized with Router conf | Major | federation, router, yarn | Aparajita Choudhary | Shilun Fan | | YARN-11537 | [Federation] Router CLI Supports List SubClusterPolicyConfiguration Of Queues. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11434 | [Router] UGI conf doesnt read user overridden configurations on Router startup | Major | federation, router, yarn | Aparajita Choudhary | Shilun Fan | | YARN-8980 | Mapreduce application container start fail after AM restart. | Major | federation | Bibin Chundatt | Chenyu Zheng | | YARN-6476 | Advanced Federation UI based on YARN UI v2 | Major | yarn, yarn-ui-v2 | Carlo Curino | Shilun Fan | | HADOOP-18863 | AWS SDK V2 - AuditFailureExceptions arent being translated properly | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18818 | Merge aws v2 upgrade feature branch into trunk | Major | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11562 | [Federation] GPG Support Query Policies In Web. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11433 | Routers main() should support generic options | Major | federation, router, yarn | Aparajita Choudhary | Aparajita Choudhary | | HADOOP-18888 | S3A. createS3AsyncClient() always enables multipart | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18906 | Increase default batch size of ZKDTSM token seqnum to reduce overflow speed of zonde dataVersion. | Major | security | Xiaoqiao He | Xiaoqiao He | | YARN-11570 | Add YARNGLOBALPOLICYGENERATORHEAPSIZE to yarn-env for GPG | Minor | federation | Shilun Fan | Shilun Fan | | YARN-11547 | [Federation] Router Supports Remove individual application records from FederationStateStore. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11580 | YARN Router Web supports displaying information for Non-Federation. 
| Major | federation, router | Shilun Fan | Shilun Fan | | YARN-11579 | Fix Physical Mem Used and Physical VCores Used are not displaying data | Major | federation, router | Shilun Fan | Shilun Fan | | HADOOP-18876 | ABFS: Change default from disk to bytebuffer for" }, { "data": "| Major | build | Anmol Asrani | Anmol Asrani | | HADOOP-18861 | ABFS: Fix failing CPK tests on trunk | Minor | build | Anmol Asrani | Anmol Asrani | | YARN-9048 | Add znode hierarchy in Federation ZK State Store | Major | federation | Bibin Chundatt | Shilun Fan | | YARN-11588 | Fix uncleaned threads in YARN Federation interceptor threadpool | Major | federation, router | Jeffrey Chang | Jeffrey Chang | | HADOOP-18889 | S3A: V2 SDK client does not work with third-party store | Critical | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18857 | AWS v2 SDK: fail meaningfully if legacy v2 signing requested | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18927 | S3ARetryHandler to treat SocketExceptions as connectivity failures | Major | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11571 | [GPG] Add Information About YARN GPG in Federation.md | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18908 | Improve s3a region handling, including determining from endpoint | Major | fs/s3 | Steve Loughran | Ahmar Suhail | | HADOOP-18829 | s3a prefetch LRU cache eviction metric | Major | fs/s3 | Viraj Jasani | Viraj Jasani | | HADOOP-18946 | S3A: testMultiObjectExceptionFilledIn() assertion error | Minor | fs/s3, test | Steve Loughran | Steve Loughran | | HADOOP-18945 | S3A: IAMInstanceCredentialsProvider failing: Failed to load credentials from IMDS | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18939 | NPE in AWS v2 SDK RetryOnErrorCodeCondition.shouldRetry() | Critical | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11576 | Improve FederationInterceptorREST AuditLog | Major | federation, router | Shilun Fan | Shilun Fan | | HADOOP-18932 | Upgrade AWS v2 SDK to 2.20.160 and v1 to 1.12.565 | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18948 | S3A. Add option fs.s3a.directory.operations.purge.uploads to purge on rename/delete | Minor | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11593 | [Federation] Improve command line help information. | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18918 | ITestS3GuardTool fails if SSE/DSSE encryption is used | Minor | fs/s3, test | Viraj Jasani | Viraj Jasani | | HADOOP-18850 | Enable dual-layer server-side encryption with AWS KMS keys (DSSE-KMS) | Major | fs/s3, security | Akira Ajisaka | Viraj Jasani | | YARN-11594 | [Federation] Improve Yarn Federation documentation | Major | federation | Shilun Fan | Shilun Fan | | YARN-11609 | Improve the time unit for FederationRMAdminInterceptor#heartbeatExpirationMillis | Minor | federation | WangYuanben | WangYuanben | | YARN-11548 | [Federation] Router Supports Format FederationStateStore. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11011 | Make YARN Router throw Exception to client clearly | Major | federation, router, yarn | Yuan Luo | Shilun Fan | | YARN-11484 | [Federation] Router Supports Yarn Client CLI Cmds. | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18872 | ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations | Major | build | Anmol Asrani | Anuj Modi | | YARN-11483 | [Federation] Router AdminCLI Supports Clean Finish Apps. 
| Major | federation | Shilun Fan | Shilun Fan | | YARN-11610 | [Federation] Add WeightedHomePolicyManager | Major | federation | Shilun Fan | Shilun Fan | | YARN-11577 | Improve FederationInterceptorREST Method Result | Major | federation | Shilun Fan | Shilun Fan | | YARN-11485 | [Federation] Router Supports Yarn Admin CLI Cmds. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11614 | [Federation] Add Federation PolicyManager Validation Rules | Major | federation | Shilun Fan | Shilun Fan | | YARN-11620 | [Federation] Improve FederationClientInterceptor To Return Partial Results of" }, { "data": "| Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18995 | S3A: Upgrade AWS SDK version to 2.21.33 for Amazon S3 Express One Zone support | Major | fs/s3 | Ahmar Suhail | Ahmar Suhail | | HADOOP-18915 | Tune/extend S3A http connection and thread pool settings | Major | fs/s3 | Ahmar Suhail | Steve Loughran | | HADOOP-18996 | S3A to provide full support for S3 Express One Zone | Major | fs/s3 | Ahmar Suhail | Steve Loughran | | YARN-11561 | [Federation] GPG Supports Format PolicyStateStore. | Major | federation | Shilun Fan | Shilun Fan | | YARN-11613 | [Federation] Router CLI Supports Delete SubClusterPolicyConfiguration Of Queues. | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18997 | S3A: Add option fs.s3a.s3express.create.session to enable/disable CreateSession | Minor | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11619 | [Federation] Router CLI Supports List SubClusters. | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-19008 | S3A: Upgrade AWS SDK to 2.21.41 | Major | fs/s3 | Steve Loughran | Steve Loughran | | YARN-11627 | [GPG] Improve GPGPolicyFacade#getPolicyManager | Major | federation | Shilun Fan | Shilun Fan | | YARN-11629 | [GPG] Improve GPGOverviewBlock Infomation | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-17912 | ABFS: Support for Encryption Context | Major | fs/azure | Sumangala Patki | Pranav Saxena | | YARN-11632 | [Doc] Add allow-partial-result description to Yarn Federation documentation | Major | federation | Shilun Fan | Shilun Fan | | HADOOP-18971 | ABFS: Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size | Major | fs/azure | Anuj Modi | Anuj Modi | | YARN-11556 | Let Federation.md more standardized | Minor | documentation | WangYuanben | WangYuanben | | YARN-11553 | Change the time unit of scCleanerIntervalMs in Router | Minor | router | WangYuanben | WangYuanben | | HADOOP-19004 | S3A: Support Authentication through HttpSigner API | Major | fs/s3 | Steve Loughran | Harshit Gupta | | HADOOP-18865 | ABFS: Adding 100 continue in userAgent String and dynamically removing it if retry is without the header enabled. 
| Minor | build | Anmol Asrani | Anmol Asrani | | HADOOP-19027 | S3A: S3AInputStream doesnt recover from HTTP/channel exceptions | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-19033 | S3A: disable checksum validation | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18959 | Use builder for prefetch CachingBlockManager | Major | fs/s3 | Viraj Jasani | Viraj Jasani | | HADOOP-18883 | Expect-100 JDK bug resolution: prevent multiple server calls | Major | fs/azure | Pranav Saxena | Pranav Saxena | | HADOOP-19015 | Increase fs.s3a.connection.maximum to 500 to minimize risk of Timeout waiting for connection from pool | Major | fs/s3 | Mukund Thakur | Mukund Thakur | | HADOOP-18975 | AWS SDK v2: extend support for FIPS endpoints | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-19046 | S3A: update AWS sdk versions to 2.23.5 and 1.12.599 | Major | build, fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18830 | S3A: Cut S3 Select | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-18980 | S3A credential provider remapping: make extensible | Minor | fs/s3 | Steve Loughran | Viraj Jasani | | HADOOP-19044 | AWS SDK V2 - Update S3A region logic | Major | fs/s3 | Ahmar Suhail | Viraj Jasani | | HADOOP-19045 | HADOOP-19045. S3A: CreateSession Timeout after 10 seconds | Major | fs/s3 | Steve Loughran | Steve Loughran | | HADOOP-19069 | Use hadoop-thirdparty" }, { "data": "| Major | hadoop-thirdparty | Shilun Fan | Shilun Fan | | HADOOP-19084 | prune dependency exports of hadoop-* modules | Blocker | build | Steve Loughran | Steve Loughran | | HADOOP-19099 | Add Protobuf Compatibility Notes | Major | documentation | Shilun Fan | Shilun Fan | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:|:|:--|:|:|:| | HDFS-15465 | Support WebHDFS accesses to the data stored in secure Datanode through insecure Namenode | Minor | federation, webhdfs | Toshihiko Uchida | Toshihiko Uchida | | HDFS-15854 | Make some parameters configurable for SlowDiskTracker and SlowPeerTracker | Major | block placement | Tao Li | Tao Li | | HDFS-15870 | Remove unused configuration dfs.namenode.stripe.min | Minor | configuration | Tao Li | Tao Li | | HDFS-15808 | Add metrics for FSNamesystem read/write lock hold long time | Major | hdfs | Tao Li | Tao Li | | HDFS-15873 | Add namenode address in logs for block report | Minor | datanode, hdfs | Tao Li | Tao Li | | HDFS-15906 | Close FSImage and FSNamesystem after formatting is complete | Minor | namanode | Tao Li | Tao Li | | HDFS-15892 | Add metric for editPendingQ in FSEditLogAsync | Minor | metrics | Tao Li | Tao Li | | HDFS-15938 | Fix java doc in FSEditLog | Minor | documentation | Tao Li | Tao Li | | HDFS-15951 | Remove unused parameters in NameNodeProxiesClient | Minor | hdfs-client | Tao Li | Tao Li | | HDFS-15975 | Use LongAdder instead of AtomicLong | Major | metrics | Tao Li | Tao Li | | HDFS-15991 | Add location into datanode info for NameNodeMXBean | Minor | metrics, namanode | Tao Li | Tao Li | | HDFS-16078 | Remove unused parameters for DatanodeManager.handleLifeline() | Minor | namanode | Tao Li | Tao Li | | HDFS-16079 | Improve the block state change log | Minor | block placement | Tao Li | Tao Li | | HDFS-16089 | EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor | Minor | erasure-coding, metrics | Tao Li | Tao Li | | HDFS-16104 | Remove unused parameter and fix java doc for DiskBalancerCLI | Minor | diskbalancer, documentation | Tao Li | Tao Li | | HDFS-16106 | Fix flaky unit 
test TestDFSShell | Minor | test | Tao Li | Tao Li | | HDFS-16110 | Remove unused method reportChecksumFailure in DFSClient | Minor | dfsclient | Tao Li | Tao Li | | HDFS-16131 | Show storage type for failed volumes on namenode web | Minor | namanode, ui | Tao Li | Tao Li | | HDFS-16194 | Simplify the code with DatanodeID#getXferAddrWithHostname | Minor | datanode, metrics, namanode | Tao Li | Tao Li | | HDFS-16280 | Fix typo" }, { "data": "ShortCircuitReplica#isStale | Minor | hdfs-client | Tao Li | Tao Li | | HDFS-16281 | Fix flaky unit tests failed due to timeout | Minor | test | Tao Li | Tao Li | | HDFS-16298 | Improve error msg for BlockMissingException | Minor | hdfs-client | Tao Li | Tao Li | | HDFS-16312 | Fix typo for DataNodeVolumeMetrics and ProfilingFileIoEvents | Minor | datanode, metrics | Tao Li | Tao Li | | HADOOP-18005 | Correct log format for LdapGroupsMapping | Minor | security | Tao Li | Tao Li | | HDFS-16319 | Add metrics doc for ReadLockLongHoldCount and WriteLockLongHoldCount | Minor | metrics | Tao Li | Tao Li | | HDFS-16326 | Simplify the code for DiskBalancer | Minor | diskbalancer | Tao Li | Tao Li | | HDFS-16335 | Fix HDFSCommands.md | Minor | documentation | Tao Li | Tao Li | | HDFS-16339 | Show the threshold when mover threads quota is exceeded | Minor | datanode | Tao Li | Tao Li | | HDFS-16435 | Remove no need TODO comment for ObserverReadProxyProvider | Minor | namanode | Tao Li | Tao Li | | HDFS-16541 | Fix a typo in NameNodeLayoutVersion. | Minor | namanode | ZhiWei Shi | ZhiWei Shi | | HDFS-16587 | Allow configuring Handler number for the JournalNodeRpcServer | Major | journal-node | ZanderXu | ZanderXu | | HDFS-16866 | Fix a typo in Dispatcher. | Minor | balancer | ZhiWei Shi | ZhiWei Shi | | HDFS-17047 | BlockManager#addStoredBlock should log storage id when AddBlockResult is REPLACED | Minor | hdfs | farmmamba | farmmamba | | YARN-9586 | [QA] Need more doc for yarn.federation.policy-manager-params when LoadBasedRouterPolicy is used | Major | federation | Shen Yinjie | Shilun Fan | | YARN-10247 | Application priority queue ACLs are not respected | Blocker | capacity scheduler | Sunil G | Sunil G | | HADOOP-17033 | Update commons-codec from 1.11 to 1.14 | Major | common | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-17055 | Remove residual code of Ozone | Major | common, ozone | Wanqiang Ji | Wanqiang Ji | | YARN-10274 | Merge QueueMapping and QueueMappingEntity | Major | yarn | Gergely Pollk | Gergely Pollk | | YARN-10281 | Redundant QueuePath usage in UserGroupMappingPlacementRule and AppNameMappingPlacementRule | Major | capacity scheduler | Gergely Pollk | Gergely Pollk | | YARN-10279 | Avoid unnecessary QueueMappingEntity creations | Minor | resourcemanager | Gergely Pollk | Hudky Mrton Gyula | | YARN-10277 | CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy | Major | capacity scheduler | Gergely Pollk | Szilard Nemeth | | HADOOP-16866 | Upgrade spotbugs to 4.0.6 | Minor | build, command | Tsuyoshi Ozawa | Masatake Iwasaki | | HADOOP-17234 | Add .asf.yaml to allow github and jira integration | Major | build, common | Ayush Saxena | Ayush Saxena | | MAPREDUCE-7298 | Distcp doesnt close the job after the job is completed | Major | distcp | Aasha Medhi | Aasha Medhi | | HADOOP-16990 | Update Mockserver | Major | hdfs-client | Wei-Chiu Chuang | Attila Doroszlai | | HADOOP-17030 | Remove unused joda-time | Major | build, common | Wei-Chiu Chuang | Wei-Chiu Chuang | | YARN-10278 | CapacityScheduler test framework 
ProportionalCapacityPreemptionPolicyMockFramework need some review | Major | capacity scheduler, test | Gergely Pollk | Szilard Nemeth | | YARN-10540 | Node page is broken in YARN UI1 and UI2 including RMWebService api for nodes | Critical | webapp | Sunil G | Jim Brennan | | HADOOP-17445 | Update the year to 2021 | Major | common | Xiaoqiao He | Xiaoqiao He | | HDFS-15731 | Reduce threadCount for unit tests to reduce the memory usage | Major | build, test | Akira Ajisaka | Akira Ajisaka | | HADOOP-17571 | Upgrade com.fasterxml.woodstox:woodstox-core for security reasons | Major | common | Viraj Jasani | Viraj Jasani | | HDFS-15895 | DFSAdmin#printOpenFiles has redundant String#format usage | Minor | dfsadmin | Viraj Jasani | Viraj Jasani | | HDFS-15926 | Removed duplicate dependency of hadoop-annotations | Minor | hdfs | Viraj Jasani | Viraj Jasani | | HADOOP-17614 | Bump netty to the latest 4.1.61 | Blocker | build, common | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-17622 | Avoid usage of" }, { "data": "IOUtils#cleanup API | Minor | common | Viraj Jasani | Viraj Jasani | | HADOOP-17624 | Remove any rocksdb exclusion code | Major | common | Wei-Chiu Chuang | Wei-Chiu Chuang | | HADOOP-17625 | Update to Jetty 9.4.39 | Major | build, common | Wei-Chiu Chuang | Wei-Chiu Chuang | | HDFS-15989 | Split TestBalancer into two classes | Major | balancer, test | Viraj Jasani | Viraj Jasani | | HDFS-15850 | Superuser actions should be reported to external enforcers | Major | security | Vivek Ratnavel Subramanian | Vivek Ratnavel Subramanian | | YARN-10746 | RmWebApp add default-node-label-expression to the queue info | Major | resourcemanager, webapp | Gergely Pollk | Gergely Pollk | | YARN-10750 | TestMetricsInvariantChecker.testManyRuns is broken since HADOOP-17524 | Major | test | Gergely Pollk | Gergely Pollk | | YARN-10747 | Bump YARN CSI protobuf version to 3.7.1 | Major | yarn | Siyao Meng | Siyao Meng | | HADOOP-17676 | Restrict imports from org.apache.curator.shaded | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17683 | Update commons-io to 2.8.0 | Major | build, common | Wei-Chiu Chuang | Akira Ajisaka | | HADOOP-17426 | Upgrade to hadoop-thirdparty-1.1.0 | Major | hadoop-thirdparty | Ayush Saxena | Wei-Chiu Chuang | | YARN-10779 | Add option to disable lowercase conversion in GetApplicationsRequestPBImpl and ApplicationSubmissionContextPBImpl | Major | resourcemanager | Peter Bacsko | Peter Bacsko | | HADOOP-17732 | Keep restrict-imports-enforcer-rule for Guava Sets in hadoop-main pom | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17739 | Use hadoop-thirdparty 1.1.1 | Major | hadoop-thirdparty | Wei-Chiu Chuang | Wei-Chiu Chuang | | MAPREDUCE-7350 | Replace Guava Lists usage by Hadoops own Lists in hadoop-mapreduce-project | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17743 | Replace Guava Lists usage by Hadoops own Lists in hadoop-common, hadoop-tools and cloud-storage projects | Major | common | Viraj Jasani | Viraj Jasani | | HDFS-16054 | Replace Guava Lists usage by Hadoops own Lists in hadoop-hdfs-project | Major | hdfs-common | Viraj Jasani | Viraj Jasani | | YARN-10805 | Replace Guava Lists usage by Hadoops own Lists in hadoop-yarn-project | Major | yarn-common | Viraj Jasani | Viraj Jasani | | HADOOP-17753 | Keep restrict-imports-enforcer-rule for Guava Lists in hadoop-main pom | Minor | common | Viraj Jasani | Viraj Jasani | | YARN-10820 | Make GetClusterNodesRequestPBImpl thread safe | Major | client | Prabhu Joseph | SwathiChandrashekar | | 
HADOOP-17788 | Replace IOUtils#closeQuietly usages | Major | common | Viraj Jasani | Viraj Jasani | | MAPREDUCE-7356 | Remove few duplicate dependencies from mapreduce-clients child poms | Minor | client | Viraj Jasani | Viraj Jasani | | HDFS-16139 | Update BPServiceActor Schedulers nextBlockReportTime atomically | Major | datanode | Viraj Jasani | Viraj Jasani | | HADOOP-17808 | ipc.Client not setting interrupt flag after catching InterruptedException | Minor | ipc | Viraj Jasani | Viraj Jasani | | HADOOP-17835 | Use CuratorCache implementation instead of PathChildrenCache / TreeCache | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17841 | Remove ListenerHandle from Hadoop registry | Minor | common | Viraj Jasani | Viraj Jasani | | HADOOP-17799 | Improve the GitHub pull request template | Major | build, documentation | Akira Ajisaka | Akira Ajisaka | | HADOOP-17834 | Bump aliyun-sdk-oss to" }, { "data": "| Major | build, common | Siyao Meng | Siyao Meng | | HADOOP-17892 | Add Hadoop code formatter in dev-support | Major | common | Viraj Jasani | Viraj Jasani | | MAPREDUCE-7363 | Rename JobClientUnitTest to TestJobClient | Major | mrv2 | Dongjoon Hyun | Dongjoon Hyun | | HADOOP-17950 | Provide replacement for deprecated APIs of commons-io IOUtils | Major | common | Viraj Jasani | Viraj Jasani | | HADOOP-17955 | Bump netty to the latest 4.1.68 | Major | build, common | Takanobu Asanuma | Takanobu Asanuma | | HADOOP-17967 | Keep restrict-imports-enforcer-rule for Guava VisibleForTesting in hadoop-main pom | Major | build, common | Viraj Jasani | Viraj Jasani | | HADOOP-17946 | Update commons-lang to 3.12.0 | Minor | build, common | Sean Busbey | Renukaprasad C | | HADOOP-17968 | Migrate checkstyle module illegalimport to maven enforcer banned-illegal-imports | Major | build | Viraj Jasani | Viraj Jasani | | HDFS-16323 | DatanodeHttpServer doesnt require handler state map while retrieving filter handlers | Minor | datanode | Viraj Jasani | Viraj Jasani | | HADOOP-18014 | CallerContext should not include some characters | Major | ipc | Takanobu Asanuma | Takanobu Asanuma | | HADOOP-13464 | update GSON to 2.7+ | Minor | build | Sean Busbey | Igor Dvorzhak | | HADOOP-18061 | Update the year to 2022 | Major | common | Ayush Saxena | Ayush Saxena | | MAPREDUCE-7371 | DistributedCache alternative APIs should not use DistributedCache APIs internally | Major | mrv1, mrv2 | Viraj Jasani | Viraj Jasani | | YARN-11026 | Make AppPlacementAllocator configurable in AppSchedulingInfo | Major | scheduler | Minni Mittal | Minni Mittal | | HADOOP-18098 | Basic verification for the release candidate vote | Major | build | Viraj Jasani | Viraj Jasani | | HDFS-16481 | Provide support to set Http and Rpc ports in MiniJournalCluster | Major | test | Viraj Jasani | Viraj Jasani | | HADOOP-18131 | Upgrade maven enforcer plugin and relevant dependencies | Major | build | Viraj Jasani | Viraj Jasani | | HDFS-16502 | Reconfigure Block Invalidate limit | Major | block placement | Viraj Jasani | Viraj Jasani | | HDFS-16522 | Set Http and Ipc ports for Datanodes in MiniDFSCluster | Major | tets | Viraj Jasani | Viraj Jasani | | HADOOP-18191 | Log retry count while handling exceptions in RetryInvocationHandler | Minor | common, io | Viraj Jasani | Viraj Jasani | | HADOOP-18196 | Remove replace-guava from replacer plugin | Major | build | Viraj Jasani | Viraj Jasani | | HADOOP-18125 | Utility to identify git commit / Jira fixVersion discrepancies for RC preparation | Major | build | Viraj Jasani | Viraj Jasani | | 
HDFS-16035 | Remove DummyGroupMapping as it is not longer used anywhere | Minor | httpfs, test | Viraj Jasani | Ashutosh Gupta | | HADOOP-18228 | Update hadoop-vote to use HADOOPRCVERSION dir | Minor | build | Viraj Jasani | Viraj Jasani | | HADOOP-18224 | Upgrade maven compiler plugin to 3.10.1 | Major | build | Viraj Jasani | Viraj Jasani | | HDFS-16618 | syncfilerange error should include more volume and file info | Minor | datanode | Viraj Jasani | Viraj Jasani | | HDFS-16616 | Remove the use if Sets#newHashSet and Sets#newTreeSet | Major | hdfs-common | Samrat Deb | Samrat Deb | | HADOOP-18300 | Update google-gson to" }, { "data": "| Minor | build | Igor Dvorzhak | Igor Dvorzhak | | HADOOP-18397 | Shutdown AWSSecurityTokenService when its resources are no longer in use | Major | fs/s3 | Viraj Jasani | Viraj Jasani | | HDFS-16730 | Update the doc that append to EC files is supported | Major | documentation, erasure-coding | Wei-Chiu Chuang | Ashutosh Gupta | | HDFS-16822 | HostRestrictingAuthorizationFilter should pass through requests if they dont access WebHDFS API | Major | webhdfs | Takanobu Asanuma | Takanobu Asanuma | | HDFS-16833 | NameNode should log internal EC blocks instead of the EC block group when it receives block reports | Major | erasure-coding | Takanobu Asanuma | Takanobu Asanuma | | HADOOP-18575 | Make XML transformer factory more lenient | Major | common | PJ Fanning | PJ Fanning | | HADOOP-18586 | Update the year to 2023 | Major | common | Ayush Saxena | Ayush Saxena | | HADOOP-18587 | upgrade to jettison 1.5.3 to fix CVE-2022-40150 | Major | common | PJ Fanning | PJ Fanning | | HDFS-16886 | Fix documentation for StateStoreRecordOperations#get(Class , Query ) | Trivial | documentation | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HADOOP-18602 | Remove netty3 dependency | Major | build | Tamas Domok | Tamas Domok | | HDFS-16902 | Add Namenode status to BPServiceActor metrics and improve logging in offerservice | Major | namanode | Viraj Jasani | Viraj Jasani | | MAPREDUCE-7433 | Remove unused mapred/LoggingHttpResponseEncoder.java | Major | mrv1 | Tamas Domok | Tamas Domok | | HADOOP-18524 | Deploy Hadoop trunk version website | Major | documentation | Ayush Saxena | Ayush Saxena | | HDFS-16901 | RBF: Routers should propagate the real user in the UGI via the caller context | Major | rbf | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HDFS-16890 | RBF: Add period state refresh to keep router state near active namenodes | Major | rbf | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HDFS-16917 | Add transfer rate quantile metrics for DataNode reads | Minor | datanode | Ravindra Dingankar | Ravindra Dingankar | | HADOOP-18658 | snakeyaml dependency: upgrade to v2.0 | Major | common | PJ Fanning | PJ Fanning | | HADOOP-18676 | Include jettison as direct dependency of hadoop-common | Major | common | Andras Katona | Andras Katona | | HDFS-16943 | RBF: Implement MySQL based StateStoreDriver | Major | hdfs, rbf | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | HADOOP-18711 | upgrade nimbus jwt jar due to issues in its embedded shaded json-smart code | Major | common | PJ Fanning | PJ Fanning | | HDFS-16998 | RBF: Add ops metrics for getSlowDatanodeReport in RouterClientActivity | Major | rbf | Viraj Jasani | Viraj Jasani | | HADOOP-17612 | Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0 | Major | common | Viraj Jasani | Viraj Jasani | | HDFS-17008 | Fix RBF JDK 11 javadoc warnings | Major | rbf | Viraj Jasani | Viraj Jasani | | YARN-11498 | Exclude 
Jettison from jersey-json artifact in hadoop-yarn-commons pom.xml | Major | build | Devaspati Krishnatri | Devaspati Krishnatri | | HADOOP-18782 | upgrade to snappy-java 1.1.10.1 due to CVEs | Major | common | PJ Fanning | PJ Fanning | | HADOOP-18773 | Upgrade maven-shade-plugin to 3.4.1 | Minor | build | Rohit Kumar | Rohit Kumar | | HADOOP-18783 | upgrade netty to 4.1.94 due to CVE | Major | common | PJ Fanning | PJ Fanning | | HADOOP-18837 | Upgrade Okio to 3.4.0 due to CVE-2023-3635 | Major | common | Rohit" } ]
{ "category": "App Definition and Development", "file_name": "index.html.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "Apache Hadoop 3.3.4 incorporates a number of significant enhancements over the previous major release line (hadoop-3.2). Users are encouraged to read the full set of release notes. This page provides an overview of the major changes. This is the first release to support ARM architectures. Protobuf was upgraded to 3.7.1 as protobuf-2.5.0 reached EOL. Java 11 runtime support is completed. Hadoop now switches to use a shaded version of Guava from hadoop-thirdparty, which helps to remove Guava version conflicts with downstream applications. For the LZ4 and Snappy compression codecs, Hadoop now moves to use lz4-java and snappy-java instead of requiring the native libraries of these to be installed on the systems running Hadoop. External services or the YARN service may need to call into WebHDFS or the YARN REST API on behalf of the user using web protocols. It would be good to support an impersonation mechanism in AuthenticationFilter or similar extensions. Lots of enhancements to the S3A code, including Delegation Token support, better handling of 404 caching, S3Guard performance, and resilience improvements. Address issues which surface in the field, tune things which need tuning, and add more tests where appropriate. Improve docs, especially troubleshooting. HDFS Router now supports security. Also contains many bug fixes and improvements. Aims to enable storage class memory first in the read cache. Although storage class memory has non-volatile characteristics, to keep the same behavior as the current read-only cache, we don't use its persistent characteristics currently. An application catalog system provides an editorial and search interface for YARN applications. This improves the usability of YARN for managing the life cycle of applications. Tencent Cloud is one of the top two cloud vendors in the China market, and its object store COS is widely used among China's cloud users. This task implements a COSN filesystem to support Tencent Cloud COS natively in Hadoop. Scheduling of opportunistic containers is supported through the central RM (YARN-5220), through distributed scheduling (YARN-2877), as well as the scheduling of containers based on actual node utilization (YARN-1011) and container promotion/demotion (YARN-5085). The Hadoop documentation includes the information you need to get started using Hadoop. Begin with the Single Node Setup, which shows you how to set up a single-node Hadoop installation. Then move on to the Cluster Setup to learn how to set up a multi-node Hadoop installation." } ]
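To make the compression change mentioned above more concrete, here is a minimal sketch of writing a Snappy-compressed file through Hadoop's codec API. Because SnappyCodec is now backed by the bundled snappy-java library, this should run without a native libhadoop/libsnappy install. The class name, output path, and sample payload are illustrative assumptions, not details taken from the release notes.

```java
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class SnappyWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // SnappyCodec is backed by snappy-java in Hadoop 3.3+, so no native
    // library installation is required for this sketch to work.
    CompressionCodec codec = ReflectionUtils.newInstance(SnappyCodec.class, conf);
    FileSystem fs = FileSystem.getLocal(conf);          // local FS keeps the sketch self-contained
    Path out = new Path("/tmp/snappy-sketch.snappy");   // hypothetical output path
    try (OutputStream os = codec.createOutputStream(fs.create(out, true))) {
      os.write("hello, hadoop".getBytes(StandardCharsets.UTF_8));
    }
  }
}
```

Using ReflectionUtils.newInstance here is simply the usual idiom for instantiating a codec so that it picks up the supplied Configuration; any other way of constructing and configuring SnappyCodec would work the same way.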
{ "category": "App Definition and Development", "file_name": "RELEASENOTES.3.3.4.html.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "Apache Hadoop 3.3.6 is an update to the Hadoop 3.3.x release branch. Users are encouraged to read the full set of release notes. This page provides an overview of the major changes. Starting from this release, Hadoop publishes Software Bill of Materials (SBOM) using CycloneDX Maven plugin. For more information about SBOM, please go to SBOM. HDFS Router-Router Based Federation now supports storing delegation tokens on MySQL, HADOOP-18535 which improves token operation throughput over the original Zookeeper-based implementation. HADOOP-18671 moved a number of HDFS-specific APIs to Hadoop Common to make it possible for certain applications that depend on HDFS semantics to run on other Hadoop compatible file systems. In particular, recoverLease() and isFileClosed() are exposed through LeaseRecoverable interface. While setSafeMode() is exposed through SafeMode interface. The abfs has a critical bug fix HADOOP-18546. ABFS. Disable purging list of in-progress reads in abfs stream close(). All users of the abfs connector in hadoop releases 3.3.2+ MUST either upgrade or disable prefetching by setting fs.azure.readaheadqueue.depth to 0 Consult the parent JIRA HADOOP-18521 ABFS ReadBufferManager buffer sharing across concurrent HTTP requests for root cause analysis, details on what is affected, and mitigations. HADOOP-18103. High performance vectored read API in Hadoop The PositionedReadable interface has now added an operation for Vectored IO (also known as Scatter/Gather IO): ``` void readVectored(List<? extends FileRange> ranges, IntFunction<ByteBuffer> allocate) ``` All the requested ranges will be retrieved into the supplied byte buffers -possibly asynchronously, possibly in parallel, with results potentially coming in out-of-order. Benchmarking of enhanced Apache ORC and Apache Parquet clients through file:// and s3a:// show significant improvements in query performance. Further Reading: FsDataInputStream. Hadoop Vectored IO: Your Data Just Got Faster! Apachecon 2022 talk. The new Intermediate Manifest Committer uses a manifest file to commit the work of successful task attempts, rather than renaming directories. Job commit is matter of reading all the manifests, creating the destination directories (parallelized) and renaming the files, again in parallel. This is both fast and correct on Azure Storage and Google GCS, and should be used there instead of the classic v1/v2 file output committers. It is also safe to use on HDFS, where it should be faster than the v1 committer. It is however optimized for cloud storage where list and rename operations are significantly slower; the benefits may be less. More details are available in the manifest committer. documentation. HDFS-16400, HDFS-16399, HDFS-16396, HDFS-16397, HDFS-16413, HDFS-16457. A number of Datanode configuration options can be changed without having to restart the datanode. This makes it possible to tune deployment configurations without cluster-wide Datanode Restarts. See DataNode.java for the list of dynamically reconfigurable attributes. A lot of dependencies have been upgraded to address recent CVEs. Many of the CVEs were not actually exploitable through the Hadoop so much of this work is just due" }, { "data": "However applications which have all the library is on a class path may be vulnerable, and the ugprades should also reduce the number of false positives security scanners report. We have not been able to upgrade every single dependency to the latest version there is. 
Some of those changes are fundamentally incompatible. If you have concerns about the state of a specific library, consult the Apache JIRA issue tracker to see if an issue has been filed, whether discussions have taken place about the library in question, and whether or not there is already a fix in the pipeline. Please don't file new JIRAs about dependency-X.Y.Z having a CVE without searching for an existing issue first. As an open-source project, contributions in this area are always welcome, especially in testing the active branches, testing applications downstream of those branches, and checking whether updated dependencies trigger regressions. Hadoop HDFS is a distributed filesystem allowing remote callers to read and write data. Hadoop YARN is a distributed job submission/execution engine allowing remote callers to submit arbitrary work into the cluster. Unless a Hadoop cluster is deployed with caller authentication via Kerberos, anyone with network access to the servers has unrestricted access to the data and the ability to run whatever code they want in the system. In production, there are generally three deployment patterns which can, with care, keep data and computing resources private. 1. Physical cluster: configure Hadoop security, usually bonded to the enterprise Kerberos/Active Directory systems. Good. 2. Cloud: transient or persistent single or multiple user/tenant cluster with private VLAN and security. Good. Consider Apache Knox for managing remote access to the cluster. 3. Cloud: transient single user/tenant cluster with private VLAN and no security at all. Requires careful network configuration, as this is the sole means of securing the cluster. Consider Apache Knox for managing remote access to the cluster. If you deploy a Hadoop cluster in-cloud without security, and without configuring a VLAN to restrict access to trusted users, you are implicitly sharing your data and computing resources with anyone with network access. If you do deploy an insecure cluster this way then port scanners will inevitably find it and submit crypto-mining jobs. If this happens to you, please do not report this as a CVE or security issue: it is utterly predictable. Secure your cluster if you want to remain exclusively your cluster. Finally, if you are using Hadoop as a service deployed/managed by someone else, do determine what security their products offer and make sure it meets your requirements. The Hadoop documentation includes the information you need to get started using Hadoop. Begin with the Single Node Setup, which shows you how to set up a single-node Hadoop installation. Then move on to the Cluster Setup to learn how to set up a multi-node Hadoop installation. Before deploying Hadoop in production, read Hadoop in Secure Mode, and follow its instructions to secure your cluster." } ]
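To make the vectored read API quoted in the release notes above more concrete, here is a minimal sketch in Java. It assumes Hadoop 3.3.5+ on the classpath; the FileRange.createFileRange() factory and the per-range getData() future follow the FsDataInputStream documentation referenced above, but exact helper names should be checked against the release you build with.

```
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VectoredReadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path(args[0]);                 // e.g. s3a://bucket/data.orc (hypothetical)
    try (FileSystem fs = file.getFileSystem(conf);
         FSDataInputStream in = fs.open(file)) {

      // Two non-contiguous ranges of the file: (offset, length).
      List<? extends FileRange> ranges = Arrays.asList(
          FileRange.createFileRange(0, 4096),
          FileRange.createFileRange(1_048_576, 8192));

      // Ranges may be fetched asynchronously and completed out of order.
      in.readVectored(ranges, ByteBuffer::allocate);

      for (FileRange range : ranges) {
        ByteBuffer data = range.getData().get();   // block until this range is ready
        System.out.printf("offset=%d read=%d bytes%n", range.getOffset(), data.remaining());
      }
    }
  }
}
```

Column-oriented formats such as ORC and Parquet benefit because each stripe or column chunk becomes one range, letting the filesystem client coalesce and parallelize the reads.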
{ "category": "App Definition and Development", "file_name": "RELEASENOTES.3.3.6.html.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "Apache Hadoop 3.3.6 is an update to the Hadoop 3.3.x release branch. Users are encouraged to read the full set of release notes. This page provides an overview of the major changes. Starting from this release, Hadoop publishes a Software Bill of Materials (SBOM) using the CycloneDX Maven plugin. For more information about SBOM, please go to SBOM. HDFS Router-Based Federation now supports storing delegation tokens on MySQL, HADOOP-18535, which improves token operation throughput over the original Zookeeper-based implementation. HADOOP-18671 moved a number of HDFS-specific APIs to Hadoop Common to make it possible for certain applications that depend on HDFS semantics to run on other Hadoop-compatible file systems. In particular, recoverLease() and isFileClosed() are exposed through the LeaseRecoverable interface, while setSafeMode() is exposed through the SafeMode interface. The abfs connector has a critical bug fix, HADOOP-18546: ABFS. Disable purging list of in-progress reads in abfs stream close(). All users of the abfs connector in Hadoop releases 3.3.2+ MUST either upgrade or disable prefetching by setting fs.azure.readaheadqueue.depth to 0. Consult the parent JIRA HADOOP-18521 ABFS ReadBufferManager buffer sharing across concurrent HTTP requests for root cause analysis, details on what is affected, and mitigations. HADOOP-18103: High performance vectored read API in Hadoop. The PositionedReadable interface now includes an operation for Vectored IO (also known as Scatter/Gather IO): ``` void readVectored(List<? extends FileRange> ranges, IntFunction<ByteBuffer> allocate) ``` All the requested ranges will be retrieved into the supplied byte buffers, possibly asynchronously, possibly in parallel, with results potentially coming in out of order. Benchmarking of enhanced Apache ORC and Apache Parquet clients through file:// and s3a:// shows significant improvements in query performance. Further Reading: FsDataInputStream. Hadoop Vectored IO: Your Data Just Got Faster! Apachecon 2022 talk. The new Intermediate Manifest Committer uses a manifest file to commit the work of successful task attempts, rather than renaming directories. Job commit is a matter of reading all the manifests, creating the destination directories (parallelized) and renaming the files, again in parallel. This is both fast and correct on Azure Storage and Google GCS, and should be used there instead of the classic v1/v2 file output committers. It is also safe to use on HDFS, where it should be faster than the v1 committer. It is however optimized for cloud storage, where list and rename operations are significantly slower; the benefits may be smaller. More details are available in the manifest committer documentation. HDFS-16400, HDFS-16399, HDFS-16396, HDFS-16397, HDFS-16413, HDFS-16457. A number of Datanode configuration options can be changed without having to restart the datanode. This makes it possible to tune deployment configurations without cluster-wide Datanode restarts. See DataNode.java for the list of dynamically reconfigurable attributes. A lot of dependencies have been upgraded to address recent CVEs. Many of the CVEs were not actually exploitable through Hadoop, so much of this work is just due diligence." }, { "data": "However, applications which have the library on their classpath may be vulnerable, and the upgrades should also reduce the number of false positives security scanners report. We have not been able to upgrade every single dependency to the latest version available. 
Some of those changes are fundamentally incompatible. If you have concerns about the state of a specific library, consult the Apache JIRA issue tracker to see if an issue has been filed, whether discussions have taken place about the library in question, and whether or not there is already a fix in the pipeline. Please don't file new JIRAs about dependency-X.Y.Z having a CVE without searching for an existing issue first. As an open-source project, contributions in this area are always welcome, especially in testing the active branches, testing applications downstream of those branches, and checking whether updated dependencies trigger regressions. Hadoop HDFS is a distributed filesystem allowing remote callers to read and write data. Hadoop YARN is a distributed job submission/execution engine allowing remote callers to submit arbitrary work into the cluster. Unless a Hadoop cluster is deployed with caller authentication via Kerberos, anyone with network access to the servers has unrestricted access to the data and the ability to run whatever code they want in the system. In production, there are generally three deployment patterns which can, with care, keep data and computing resources private. 1. Physical cluster: configure Hadoop security, usually bonded to the enterprise Kerberos/Active Directory systems. Good. 2. Cloud: transient or persistent single or multiple user/tenant cluster with private VLAN and security. Good. Consider Apache Knox for managing remote access to the cluster. 3. Cloud: transient single user/tenant cluster with private VLAN and no security at all. Requires careful network configuration, as this is the sole means of securing the cluster. Consider Apache Knox for managing remote access to the cluster. If you deploy a Hadoop cluster in-cloud without security, and without configuring a VLAN to restrict access to trusted users, you are implicitly sharing your data and computing resources with anyone with network access. If you do deploy an insecure cluster this way then port scanners will inevitably find it and submit crypto-mining jobs. If this happens to you, please do not report this as a CVE or security issue: it is utterly predictable. Secure your cluster if you want to remain exclusively your cluster. Finally, if you are using Hadoop as a service deployed/managed by someone else, do determine what security their products offer and make sure it meets your requirements. The Hadoop documentation includes the information you need to get started using Hadoop. Begin with the Single Node Setup, which shows you how to set up a single-node Hadoop installation. Then move on to the Cluster Setup to learn how to set up a multi-node Hadoop installation. Before deploying Hadoop in production, read Hadoop in Secure Mode, and follow its instructions to secure your cluster." } ]
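As a concrete illustration of the ABFS mitigation mentioned in the release notes above, read-ahead prefetching can be disabled through the normal Hadoop Configuration mechanism (or equivalently in core-site.xml). This is a hedged sketch: the property name comes from the note itself, while the container, account, and path shown are hypothetical placeholders.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsPrefetchMitigation {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Mitigation for HADOOP-18521 on releases without the HADOOP-18546 fix:
    // disable ABFS read-ahead prefetching entirely.
    conf.set("fs.azure.readaheadqueue.depth", "0");

    // Hypothetical container and path, for illustration only.
    Path path = new Path("abfs://container@account.dfs.core.windows.net/data/part-0000.parquet");
    try (FileSystem fs = path.getFileSystem(conf)) {
      System.out.println("exists: " + fs.exists(path));
    }
  }
}
```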
{ "category": "App Definition and Development", "file_name": "RELEASENOTES.3.4.0.html.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "Apache Hadoop 2.10.2 is a point release in the 2.10.x release line, building upon the previous stable release 2.10.1. Users are encouraged to read the release notes for an overview of the major changes and the change log for a list of all changes. The Hadoop documentation includes the information you need to get started using Hadoop. Begin with the Single Node Setup, which shows you how to set up a single-node Hadoop installation. Then move on to the Cluster Setup to learn how to set up a multi-node Hadoop installation." } ]
{ "category": "App Definition and Development", "file_name": "SingleCluster.html.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "This document describes how to set up and configure a single-node Hadoop installation so that you can quickly perform simple operations using Hadoop MapReduce and the Hadoop Distributed File System (HDFS). Important: all production Hadoop clusters use Kerberos to authenticate callers and secure access to HDFS data, as well as restricting access to computation services (YARN etc.). These instructions do not cover integration with any Kerberos services; everyone bringing up a production cluster should include connecting to their organisation's Kerberos infrastructure as a key part of the deployment. See Security for details on how to secure a cluster. Required software for Linux includes: Java must be installed. Recommended Java versions are described at HadoopJavaVersions. ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop daemons if the optional start and stop scripts are to be used. Additionally, it is recommended that pdsh also be installed for better ssh resource management. If your cluster doesn't have the requisite software you will need to install it. For example, on Ubuntu Linux: ``` $ sudo apt-get install ssh $ sudo apt-get install pdsh ``` To get a Hadoop distribution, download a recent stable release from one of the Apache Download Mirrors. Unpack the downloaded Hadoop distribution. In the distribution, edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows: ``` export JAVA_HOME=/usr/java/latest ``` Try the following command: ``` $ bin/hadoop ``` This will display the usage documentation for the hadoop script. Now you are ready to start your Hadoop cluster in one of the three supported modes: By default, Hadoop is configured to run in a non-distributed mode, as a single Java process. This is useful for debugging. The following example copies the unpacked conf directory to use as input and then finds and displays every match of the given regular expression. Output is written to the given output directory. ``` $ mkdir input $ cp etc/hadoop/*.xml input $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar grep input output 'dfs[a-z.]+' $ cat output/* ``` Hadoop can also be run on a single node in a pseudo-distributed mode where each Hadoop daemon runs in a separate Java process. Use the following: etc/hadoop/core-site.xml: ``` <configuration> <property>" }, { "data": "<name>fs.defaultFS</name> <value>hdfs://localhost:9000</value> </property> </configuration> ``` etc/hadoop/hdfs-site.xml: ``` <configuration> <property> <name>dfs.replication</name> <value>1</value> </property> </configuration> ``` Now check that you can ssh to the localhost without a passphrase: ``` $ ssh localhost ``` If you cannot ssh to localhost without a passphrase, execute the following commands: ``` $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys $ chmod 0600 ~/.ssh/authorized_keys ``` The following instructions are to run a MapReduce job locally. If you want to execute a job on YARN, see YARN on Single Node. Format the filesystem: ``` $ bin/hdfs namenode -format ``` Start the NameNode daemon and DataNode daemon: ``` $ sbin/start-dfs.sh ``` The hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs). 
Browse the web interface for the NameNode; by default it is available at http://localhost:9870/. Make the HDFS directories required to execute MapReduce jobs: ``` $ bin/hdfs dfs -mkdir -p /user/<username> ``` Copy the input files into the distributed filesystem: ``` $ bin/hdfs dfs -mkdir input $ bin/hdfs dfs -put etc/hadoop/*.xml input ``` Run some of the examples provided: ``` $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar grep input output 'dfs[a-z.]+' ``` Examine the output files: Copy the output files from the distributed filesystem to the local filesystem and examine them: ``` $ bin/hdfs dfs -get output output $ cat output/* ``` or View the output files on the distributed filesystem: ``` $ bin/hdfs dfs -cat output/* ``` When you're done, stop the daemons with: ``` $ sbin/stop-dfs.sh ``` You can run a MapReduce job on YARN in a pseudo-distributed mode by setting a few parameters and running the ResourceManager daemon and NodeManager daemon in addition. The following instructions assume that steps 1-4 of the above instructions have already been executed. Configure parameters as follows: etc/hadoop/mapred-site.xml: ``` <configuration> <property> <name>mapreduce.framework.name</name> <value>yarn</value> </property> <property> <name>mapreduce.application.classpath</name> <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value> </property> </configuration> ``` etc/hadoop/yarn-site.xml: ``` <configuration> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> <property> <name>yarn.nodemanager.env-whitelist</name> <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value> </property> </configuration> ``` Start the ResourceManager daemon and NodeManager daemon: ``` $ sbin/start-yarn.sh ``` Browse the web interface for the ResourceManager; by default it is available at http://localhost:8088/. Run a MapReduce job. When you're done, stop the daemons with: ``` $ sbin/stop-yarn.sh ``` For information on setting up fully-distributed, non-trivial clusters see Cluster Setup." } ]
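Once the pseudo-distributed HDFS from this walkthrough is running, the same filesystem can also be exercised programmatically. Below is a minimal sketch using the standard org.apache.hadoop.fs API; it assumes the fs.defaultFS of hdfs://localhost:9000 configured above, and the hello.txt path is a hypothetical example.

```
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsSmokeTest {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Matches the core-site.xml used in the pseudo-distributed setup above.
    conf.set("fs.defaultFS", "hdfs://localhost:9000");

    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path("/user/" + System.getProperty("user.name") + "/hello.txt");

      // Write a small file into HDFS.
      try (FSDataOutputStream out = fs.create(file, true)) {
        out.write("hello from the single-node cluster\n".getBytes(StandardCharsets.UTF_8));
      }

      // Read it back and print it, confirming the NameNode/DataNode are serving data.
      try (FSDataInputStream in = fs.open(file)) {
        IOUtils.copyBytes(in, System.out, 4096, false);
      }
    }
  }
}
```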
{ "category": "App Definition and Development", "file_name": "index.md", "project_name": "Apache Ignite", "subcategory": "Database" }
[ { "data": "Apache Ignite is a distributed database for high-performance computing with in-memory speed. The technical documentation introduces you to the key capabilities, shows how to use certain features, and explains how to approach cluster optimization and troubleshooting. If you are new to Ignite, then start with our quick start guides, and build the first application in a matter of 5-10 minutes. Otherwise, select the topic of your interest and have your problems solved, and questions answered. Good luck with your Ignite journey! Build the first application in a matter of minutes. Java SQL REST API C#/.NET C++ Python Node.JS PHP API reference for various programming languages. Latest Stable Version JavaDoc C#/.NET C++ Older Versions With the top-level navigation menu, change an Ignite version and select a version-specific API from the APIs drop-down list. Or, go to the downloads page for a full archive of the versions. The Apache Ignite GitHub repository contains a number of runnable examples that illustrate various Ignite functionality. Java C#/.NET C++ Python Node.JS PHP Ignite is available for Java, .NET/C#, C++ and other programming languages. The Java version provides the richest API. The .NET/C#, C++, Python, etc. languages may have limited functionality. To make the Ignite documentation intuitive for all application developers, we adhere to the following conventions: The information provided in this documentation applies to all programming languages unless noted otherwise. Code samples for different languages are provided in different tabs as shown below. For example, if you are a .NET developer, click on the .NET tab in the code examples to see .NET specific code. ``` This is a place where an example of XML configuration is provided. Click on other tabs to view an equivalent programmatic configuration.``` ``` Code sample in Java. Click on other tabs to view the same example in other languages.``` ``` Code sample in .NET. Click on other tabs to view the same example in other languages.``` ``` Code sample in C++. Click on other tabs to view the same example in other languages.``` If there is no tab for a specific language, this most likely means that the functionality is not supported in that language." } ]
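For the Java quick-start flow mentioned above, a first application can be as small as the sketch below. It assumes the ignite-core dependency is on the classpath; the cache name is an arbitrary example, and a production deployment would normally pass an explicit IgniteConfiguration instead of the defaults used here.

```
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class IgniteHelloWorld {
  public static void main(String[] args) {
    // Starts an embedded Ignite node with the default configuration.
    try (Ignite ignite = Ignition.start()) {
      IgniteCache<Integer, String> cache = ignite.getOrCreateCache("quickstart");

      // Basic key-value operations against the distributed cache.
      cache.put(1, "Hello");
      cache.put(2, "Ignite");

      System.out.println(cache.get(1) + ", " + cache.get(2) + "!");
    }
  }
}
```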
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "BigchainDB", "subcategory": "Database" }
[ { "data": "Meet BigchainDB. The blockchain database. It has some database characteristics and some blockchain properties, including decentralization, immutability and native support for assets. At a high level, one can communicate with a BigchainDB network (set of nodes) using the BigchainDB HTTP API, or a wrapper for that API, such as the BigchainDB Python Driver. Each BigchainDB node runs BigchainDB Server and various other software. The terminology page explains some of those terms in more detail." } ]
{ "category": "App Definition and Development", "file_name": "clickhouse-client-local.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "It was designed to be fast. Query execution performance has always been a top priority during the development process, but other important characteristics like user-friendliness, scalability, and security were also considered so ClickHouse could become a real production system. \"Building for Fast\" talk from ClickHouse Meetup Amsterdam, June 2022. \"Secrets of ClickHouse Performance Optimizations\" talk from Big Data Technology Conference, December 2019, offers a more technical take on the same topic. ClickHouse was initially built as a prototype to do just a single task well: to filter and aggregate data as fast as possible. That's what needs to be done to build a typical analytical report, and that's what a typical GROUP BY query does. The ClickHouse team has made several high-level decisions that, when combined, made achieving this task possible: Column-oriented storage: Source data often contain hundreds or even thousands of columns, while a report can use just a few of them. The system needs to avoid reading unnecessary columns to avoid expensive disk read operations. Indexes: Memory-resident ClickHouse data structures allow the reading of only the necessary columns, and only the necessary row ranges of those columns. Data compression: Storing different values of the same column together often leads to better compression ratios (compared to row-oriented systems) because in real data a column often has the same, or not so many different, values for neighboring rows. In addition to general-purpose compression, ClickHouse supports specialized codecs that can make data even more compact. Vectorized query execution: ClickHouse not only stores data in columns but also processes data in columns. This leads to better CPU cache utilization and allows for SIMD CPU instruction usage. Scalability: ClickHouse can leverage all available CPU cores and disks to execute even a single query. Not only on a single server but all CPU cores and disks of a cluster as well. But many other database management systems use similar techniques. What really makes ClickHouse stand out is attention to low-level details. Most programming languages provide implementations for most common algorithms and data structures, but they tend to be too generic to be effective. Every task can be considered as a landscape with various characteristics, instead of just throwing in a random implementation. For example, if you need a hash table, here are some key questions to consider: The hash table is a key data structure for the GROUP BY implementation, and ClickHouse automatically chooses one of 30+ variations for each specific query. The same goes for algorithms; for example, in sorting you might consider: Algorithms that rely on characteristics of the data they are working with can often do better than their generic counterparts. If it is not really known in advance, the system can try various implementations and choose the one that works best in runtime. For example, see an article on how LZ4 decompression is implemented in ClickHouse. Last but not least, the ClickHouse team always monitors the Internet for people claiming that they came up with the best implementation, algorithm, or data structure to do something and tries it out. Those claims mostly appear to be false, but from time to time you'll indeed find a gem." } ]
{ "category": "App Definition and Development", "file_name": "install#self-managed-install.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "ClickHouse integrations are organized by their support level: Each integration is further categorized into Language client, Data ingestion, Data visualization and SQL client categories. We are actively compiling this list of ClickHouse integrations below, so it's not exhaustive. Feel free to contribute any relevant ClickHouse integration to the list. | Name | Logo | Category | Description | Resources | |:|:|:-|:-|:| | Amazon Kinesis | nan | Data ingestion | Integration with Amazon Kinesis. | Documentation | | Amazon MSK | nan | Data ingestion | Integration with Amazon Managed Streaming for Apache Kafka (MSK). | Documentation | | Amazon S3 | nan | Data ingestion | Import from, export to, and transform S3 data in flight with ClickHouse built-in S3 functions. | Documentation | | Cassandra | nan | Data integration | Allows ClickHouse to use Cassandra as a dictionary source. | Documentation | | CHDB | nan | SQL Client | An embedded OLAP SQL Engine | GitHub,Documentation | | ClickHouse Client | nan | SQL client | ClickHouse Client is the native command-line client for ClickHouse. | Documentation | | DeltaLake | nan | Data integration | provides a read-only integration with existing Delta Lake tables in Amazon S3. | Documentation | | EmbeddedRocksDB | nan | Data integration | Allows integrating ClickHouse with rocksdb. | Documentation | | Fivetran | nan | Data ingestion | ClickHouse Cloud destination for the Fivetran data movement platform. | Documentation | | Google Cloud Storage | nan | Data ingestion | Import from, export to, and transform GCS data in flight with ClickHouse built-in S3 functions. | Documentation | | Go | .st0{fill:#00acd7} | Language client | The Go client uses the native interface for a performant, low-overhead means of connecting to ClickHouse. | Documentation | | HDFS | nan | Data integration | Provides integration with the Apache Hadoop ecosystem by allowing to manage data on HDFS via ClickHouse. | Documentation | | Hive | nan | Data integration | The Hive engine allows you to perform SELECT quries on HDFS Hive table. | Documentation | | HouseWatch | nan | Data management | Open source tool for monitoring and managing ClickHouse clusters. | GitHub | | Hudi | nan | Data integration | provides a read-only integration with existing Apache Hudi tables in Amazon S3. | Documentation | | Iceberg | nan | Data integration | Provides a read-only integration with existing Apache Iceberg tables in Amazon S3. | Documentation | | JDBC | .cls-1{fill:#1c1733} | Data integration | Allows ClickHouse to connect to external databases via JDBC table engine. | Documentation | | Java, JDBC | nan | Language client | The Java client and JDBC driver. | Documentation | | Kafka | nan | Data ingestion | Integration with Apache Kafka, the open-source distributed event streaming platform. | Documentation | | Looker Studio | nan | Data visualization | Looker Studio is a free tool that turns your data into informative, easy to read, easy to share, and fully customizable dashboards and reports. | Documentation | | Looker" }, { "data": ".cls-5{fill:#5f6368} | Data visualization | Looker is an enterprise platform for BI, data applications, and embedded analytics that helps you explore and share insights in real time. | Documentation | | Metabase | nan | Data visualization | Metabase is an easy-to-use, open source UI tool for asking questions about your data. 
| Documentation | | MinIO | nan | Data ingestion | MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with the Amazon S3 cloud storage service | Documentation | | MongoDB | nan | Data integration | MongoDB engine is read-only table engine which allows to read data (SELECT queries) from remote MongoDB collection. | Documentation | | MySQL | nan | Data integration | The MySQL engine allows you to perform SELECT and INSERT queries on data that is stored on a remote MySQL server. | Documentation | | NATS | nan | Data integration | Allows integrating ClickHouse with NATS. | Documentation | | Node.JS | nan | Language client | The official JS client for connecting to ClickHouse. | Documentation | | ODBC | nan | Data integration | Allows ClickHouse to connect to external databases via ODBC table engine. | Documentation | | PostgreSQL | nan | Data integration | Allows to perform SELECT and INSERT queries on data that is stored on a remote PostgreSQL server. | Documentation | | PowerBI | nan | Data visualization | Microsoft Power BI is an interactive data visualization software product developed by Microsoft with a primary focus on business intelligence. | Documentation | | Python | nan | Language client | A suite of Python packages for connecting Python to ClickHouse. | Documentation | | QuickSight | nan | Data visualization | Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale. | Documentation | | RabbitMQ | nan | Data integration | Allows ClickHouse to connect RabbitMQ. | Documentation | | Redis | nan | Data integration | Allows ClickHouse to use Redis as a dictionary source. | Documentation | | SQLite | nan | Data integration | Allows to import and export data to SQLite and supports queries to SQLite tables directly from ClickHouse. | Documentation | | Superset | nan | Data visualization | Explore and visualize your ClickHouse data with Apache Superset. | Documentation | | Tableau Online | nan | Data visualization | Tableau Online streamlines the power of data to make people faster and more confident decision makers from anywhere | Documentation | | dbt | dbt | Data ingestion | Use dbt (data build tool) to transform data in ClickHouse by simply writing select statements. dbt puts the T in ELT. | Documentation | | Name | Logo | Category | Description | Resources | |:-|:--|:--|:--|:--| | Airbyte | nan | Data ingestion | Use Airbyte, to create ELT data pipelines with more than 140 connectors to load and sync your data into ClickHouse. | Documentation | | AccelData | nan | Data management | ADOC allows users to monitor and ensure the dependability and integrity of their visualized data, facilitating rea-time data processing and analytics. | Documentation | | Atlas | nan | Schema management | Manage your ClickHouse schema as" }, { "data": "| Documentation | | Azure Event Hubs | nan | Data Ingestion | A data streaming platform that supports Apache Kafka's native protocol | Website | | BlinkOps | nan | Security automation | Create automations to manage data and user permissions. | Documentation | | Calyptia (Fluent Bit) | nan | Data ingestion | CNCF graduated open-source project for the collection, processing, and delivery of logs, metrics, and traces | Blog | | CloudCanal | nan | Data integration | A data synchronization and migration tool. | Website | | CloudQuery | .st4{fill:#404041} | Data ingestion | Open source high-performance ELT framework. 
| Documentation | | Confluent | nan | Data ingestion | Integration with Apache Kafka on Confluent platform. | Documentation | | Cube.js | nan | Data visualization | Cube is the Semantic Layer for building data apps. | Website | | DBeaver | nan | SQL client | Free multi-platform database administration tool. Connects to Clickhouse through JDBC driver. | Documentation | | DataGrip | nan | SQL client | DataGrip is a powerful database IDE with dedicated support for ClickHouse. | Documentation | | Dataddo | nan | Data integration | Data integration platform | Website | | DbVisualizer | nan | SQL client | DbVisualizer is a database tool with extended support for ClickHouse. | Documentation | | Decodable | nan | Data ingestion | Powerful Stream Processing Built On Apache Flink | Website | | Deepnote | nan | Data visualization | Deepnote is a collaborative Jupyter-compatible data notebook built for teams to discover and share insights. | Documentation | | Draxlr | nan | Data visualization | Draxlr is a Business intelligence tool with data visualization and analytics. | Documentation | | EMQX | 320 | Data ingestion | EMQX is an open source MQTT broker with a high-performance real-time message processing engine, powering event streaming for IoT devices at massive scale. | Documentation | | Explo | nan | Data visualization | Explo is a customer-facing analytics tool for any platform | Documentation | | Gigasheet | nan | Data visualization | A cloud big data analytics spreadsheet that enables business users to instantly analyze and explore ClickHouse data. | Website | | Grafana | .st0{fill:#c59a6e}.st1{fill:#3cb44b}.st2{fill:#67308f}.st3{fill:#d1d1d1}.st4{fill:#ef5b28}.st5{stroke:#000;stroke-width:16;stroke-miterlimit:10}.st6{fill:url(#Triangle-31)}.st7{fill:#333}.st8{fill:#cf2129}.st9{fill:#cc3832}.st10{fill:#ba443d}.st11{fill:#bac556}.st12{fill:#f8df4b}.st13{fill:#f5cf47}.st14{fill:#a9b75d}.st15{fill:#ddbe5b}.st16{fill:#6fbecb}.st17{fill:#79c0bf}.st18{fill:#77c3d2}.st19{fill:#5cb8bb}.st20{fill:#529fb5}.st21{fill:#ba3d36}.st22{fill:#9d3634}.st23{fill:#d33e35}.st24{fill:#869b63}.st25{fill:#bec54f}.st26{fill:#9ead59}.st27{fill:none}.st28{fill:#43ae2a}.st29{fill:#009ddd}.st30{fill:#00a297}.st31{fill:#0072cc}.st32{fill:#82bb00}.st33{fill:#10069d}.st34{fill:#333f48}.st35{clip-path:url(#SVGID2)}.st36{clip-path:url(#SVGID4);fill:#414042} | Data visualization | With Grafana you can create, explore and share all of your data through dashboards. | Documentation | | Great Expectations | nan | Data management | An open-source data management tool, with a paid cloud offering. | Website | | GrowthBook | nan | Data visualization | Warehouse native experimentation platform (feature flagging and A/B testing). | Documentation | | HEX | nan | Data visualization | Hex is a modern, collaborative platform with notebooks, data apps, SQL, Python, no-code, R, and so much more. | Documentation | | Hashboard | nan | Data visualization | Hashboard is a business intelligence platform that enables self-service data exploration and metric" }, { "data": "| Documentation | | HighTouch | nan | Data integration | Sync your data directly from your warehouse to 140+ destinations | Website | | Holistics | nan | Data visualization | Business Intelligence for ClickHouse database | Website | | IBM Instana | nan | Data management | Instana can auto-discover and monitor ClickHouse server processes | Documentation | | Jitsu | nan | Data analytics | An open-source event collection platform. 
| Documentation | | LangChain | | SDK | LangChain is a framework for developing applications powered by language models | Documentation | | Mage | nan | Data Ingestion | Open-source data pipeline tool for transforming and integrating data | Documentation | | Metaplane | nan | Data management | Data observability for every data team | Website | | MindsDB | nan | AI/ML | The platform for customizing AI from enterprise data | Website | | Mitzu | nan | Data visualization | Mitzu is a no-code warehouse-native product analytics application. Find funnel, retention, user segmentation insights without copying your data. | Documentation | | Mode Analytics | nan | Data visualization | Business Intelligence built around data teams | Website | | Omni | .st0{display:none}.st1{fill:#aa3f6b}.st1,.st2{display:inline}.st3{fill:#fff}.st4,.st5{fill:#ff5789}.st5{display:inline}.st6{fill:#aa3f6b}.st7{display:inline;fill:#e8bf43}.st8{fill:#00a1ff}.st10,.st11,.st9{display:inline;fill:#a3acbb}.st10,.st11{fill:#3f4755}.st11{fill:#00a1ff}.st13{fill:#3f4755} | Data visualization | Business intelligence that speaks your language. Explore, visualize, and model data your way with Omni. From spreadsheets to SQLin a single platform. | Website | | Openblocks | nan | SQL client | Openblocks is a low code platform for building UIs | Documentation | | OpsRamp (HP) | nan | Data management | Provides observability metrics for ClickHouse | Documentation | | Popsink | nan | Data integration | Build real-time Change Data Capture (CDC) pipelines to ClickHouse. | Documentation | | PeerDB | nan | Data integration | PeerDB provides a fast, simple and cost-effective way to replicate data from Postgres to Data Warehouses, Queues and Storage. | Documentation | | Prequel | nan | Data sharing | Connect your ClickHouse instance to Prequel to share data to or sync data from your users and partners. | Documentation | | Redash | nan | Data visualization | Connect and query your data sources, build dashboards to visualize data and share | Website | | Redpanda | nan | Data ingestion | Redpanda is the streaming data platform for developers. Its API-compatible with Apache Kafka, but 10x faster, much easier to use, and more cost effective | Blog | | Restack Data Hub | nan | Data governance | Users can achieve more comprehensive data governance and observability framework with Restack Data Hub. | Documentation | | Restack OpenMetadata | nan | Data quality | Restack OpenMetadata supports metadata extraction, query usage tracking, data profiling, and data quality checks. | Documentation | | Retool | nan | No code | Create your application with drag-and-drop interface. | Documentation | | Rill | nan | Data visualization | Rill is an Operational BI tool purpose-built for slicing & dicing data with OLAP engines. | Documentation | | RisingWave | .cls-1{fill:#005eec} | Data ingestion | SQL stream processing with a Postgres-like experience. 10x faster and more cost-efficient than Apache Flink. | Documentation | | RudderStack | nan | Data ingestion | RudderStack makes it easy to collect and send customer data to the tools and teams that need it | Documentation | | RunReveal | nan | Data ingestion | Ingest and normalize audit logs from any SaaS application into" }, { "data": "Create alerts and detections from scheduled queries. | Website | | Sematext | nan | Data management | Observability monitoring for ClickHouse databases. 
| Documentation | | SiSense | nan | Data visualization | Embed analytics into any application or workflow | Website | | SigNoz | nan | Data visualization | Open Source Observability Platform | Documentation | | Snappy Flow | nan | Data management | Collects ClickHouse database metrics via plugin. | Documentation | | Soda | nan | Data quality | Soda integration makes it easy for organziations to detect, resolve, and prevent data quality issues by running data quality checks on data before it is loaded into the database. | Website | | StreamingFast | nan | Data ingestion | Blockchain-agnostic, parallelized and streaming-first data engine. | Website | | Streamkap | nan | Data ingestion | Setup real-time CDC (Change Data Capture) streaming to ClickHouse with high throughput in minutes. | Documentation | | Supabase | nan | Data ingestion | Open source Firebase alternative | GitHub,Blog | | Teleport | nan | Secure connection | Teleport Database Service authenticates to ClickHouse using x509 certificates, which are available for the ClickHouse HTTP and Native (TCP) interfaces. | Documentation | | TABLUM.IO | nan | SQL client | TABLUM.IO ingests data from a variety of sources, normalizes and cleans inconsistencies, and gives you access to it via SQL. | Documentation | | Tooljet | nan | Data Visualization | ToolJet is an open-source low-code framework to build and deploy custom internal tools. | Documentation | | Upstash | nan | Data Ingestion | A data platform offering serverless Kafka and other solutions | Website | | Vector | nan | Data ingestion | A lightweight, ultra-fast tool for building observability pipelines with built-in compatibility with ClickHouse. | Documentation | | WarpStream | nan | Data Ingestion | A Kafka compatible data streaming platform built directly on top of object storage | Website | | YepCode | nan | Data integration | YepCode is the integration & automation tool that loves source code. | Documentation | | Zing Data | nan | Data visualization | Simple social business intelligence for ClickHouse, made for iOS, Android and the web. | Documentation | | Name | Logo | Category | Description | Resources | |:--|:|:|:--|:-| | Apache Airflow | nan | Data ingestion | Open-source workflow management platform for data engineering pipelines | Github | | Apache Beam | nan | Data ingestion | Open source, unified model and set of language-specific SDKs for defining and executing data processing workflows. Compatible with Google Dataflow. | Documentation | | Apache InLong | nan | Data ingestion | One-stop integration framework for massive data | Documentation | | Apache Nifi | nan | Data ingestion | Automates the flow of data between software systems | Documentation | | Apache SeaTunnel | nan | Data ingestion | SeaTunnel is a very easy-to-use ultra-high-performance distributed data integration platform | Website | | Apache SkyWalking | nan | Data management | Open-source APM system that provides monitoring, tracing and diagnosing capabilities for distributed systems in Cloud Native" }, { "data": "| Blog | | Apache Spark | Apache Spark logo | Data ingestion | Spark ClickHouse Connector is a high performance connector built on top of Spark DataSource V2. | GitHub,Documentation | | Apache StreamPark | nan | Data ingestion | A stream processing application development framework and stream processing operation platform. 
| Website | | Bytebase | nan | Data management | Open-source database DevOps tool, it's the GitLab for managing databases throughout the application development lifecycle | Documentation | | C# | nan | Language client | ClickHouse.Client is a feature-rich ADO.NET client implementation for ClickHouse | Documentation | | C++ | nan | Language client | C++ client for ClickHouse | GitHub | | CHProxy | nan | Data management | Chproxy is an HTTP proxy and load balancer for the ClickHouse database | GitHub | | Chat-DBT | nan | AI Integration | Create ClickHouse queries using Chat GPT. | GitHub | | ClickHouse Monitoring Dashboard | nan | Dashboard | A simple monitoring dashboard for ClickHouse | Github | | Common Lisp | nan | Language client | Common Lisp ClickHouse Client Library | GitHub | | DBNet | nan | Software IDE | Web-based SQL IDE using Go as a back-end, and the browser as the front-end. | Github | | DataLens | nan | Data visualization | An open-source data analytics and visualization tool. | Website,Documentation | | Dataease | nan | Data visualization | Open source data visualization analysis tool to help users analyze data and gain insight into business trends. | Website | | Datahub | nan | Data management | Open Source Data Catalog that enables data discovery, data observability and federated governance | Documentation | | Dbmate | nan | Data management | Database migration tool that will keep your database schema in sync across multiple developers and servers | GitHub | | Deepflow | nan | Data ingestion | Application Observability using eBPF | Website | | Easypanel | nan | Deployment method | It's a modern server control panel. You can use it to deploy ClickHouse on your own server. | Website, Documentation | | Explo | nan | Data visualization | Explo helps companies build real-time analytics dashboard by providing flexible components. | Website | | Flink | nan | Data ingestion | Flink sink for ClickHouse database, powered by Async Http Client | GitHub | | Goose | nan | Data migration | A database migration tool that supports SQL migrations and Go functions. | GitHub,Documentation | | Ibis | nan | Language client | The flexibility of Python analytics with the scale and performance of modern SQL | Website | | Jaeger | nan | Data ingestion | Jaeger gRPC storage plugin implementation for storing traces in ClickHouse | GitHub | | JupySQL | nan | SQL client | The native SQL client for Jupyter" }, { "data": "| Documentation | | Kestra | nan | Data orchestration | Open source data orchestration and scheduling platform | Website | | Logchain | nan | Security | Data security and privileged access management | Website | | Meltano | nan | Data ingestion | Meltano is an open-source, full-stack data integration platform | Documentation | | Mprove | nan | Data visualization | Self-service Business Intelligence with Version Control | Website | | Netobserv | nan | Data management | An OpenShift and Kubernetes operator for network observability. | Blog | | Observable | nan | Data visualization | Observable is a platform where you can collaboratively explore, analyze, visualize, and communicate with data on the web. 
| Website | | OpenTelemetry | nan | Data ingestion | Exporter that supports sending logs, metrics, trace OpenTelemetry data to ClickHouse | GitHub | | PHP | nan | Language client | This extension provides the ClickHouse integration for the Yii framework 2.0 | GitHub | | Pgwarehouse | nan | Data ingestion | Simple tool to quickly replicate Postgres tables into ClickHouse | GitHub | | Pinax | nan | Blockchain analytics | Indexing, analytics, and search tools for blockchains. | Blog | | Pulse | nan | Data management | A developer platform for internal data UIs. | Website | | QStudio | nan | GUI | A simple to use GUI for interacting with ClickHouse databases. | Website | | QStudio | nan | SQL client | qStudio is a free SQL GUI for running SQL scripts and charting results | Documentation | | Qryn | nan | Data Ingestion, Management, Visualization | qryn is a polyglot observability stack built on top of ClickHouse, transparently compatible with Loki, Prometheus, Tempo, Opentelemetry and many other formats and standard APIs without requiring custom clients, code or plugins | Documentation, Github, Website | | RSyslog | nan | Data Ingestion | This module provides native support for logging to ClickHouse. | Documentation | | Rocket.BI | nan | Data visualization | RocketBI is a self-service business intelligence platform that helps you quickly analyze data, build drag-n-drop visualizations and collaborate with colleagues right on your web browser. | GitHub, Documentation | | Ruby | nan | Language client | A modern Ruby database driver for ClickHouse | GitHub | | Rust | nan | Language client | A typed client for ClickHouse | GitHub | | R | nan | Language client | R package is a DBI interface for the ClickHouse database | GitHub | | SQLPad | nan | SQL client | SQLPad is a web app for writing and running SQL queries and visualizing the results | Documentation | | Scala | nan | Language client | ClickHouse Scala Client that uses Akka Http | GitHub | | SchemaSpy | nan | Data visualization | SchemaSpy supports ClickHouse schema visualuzation | GitHub | | Tableau | nan | Data visualization | Interactive data visualization software focused on business intelligence | Documentation | | TricksterCache | trickster-logo | Data visualization | Open Source HTTP Reverse Proxy Cache and Time Series Dashboard Accelerator | Website | | Visual Studio Client | nan | Language client | Visual studio lightweight client | Marketplace | | VulcanSQL | nan | Data API Framework | It's a Data API Framework for data applications that helps data folks create and share data APIs faster. It turns your SQL templates into data APIs. No backend skills required. | Website, Documentation |" } ]
{ "category": "App Definition and Development", "file_name": "integrations.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "clickhouse client is a client application that is used to connect to ClickHouse from the command line. clickhouse local is a client application that is used to query files on disk and across the network. Many of the guides in the ClickHouse documentation will have you examine the schema of a file (CSV, TSV, Parquet, etc.) with clickhouse local, query the file, and even manipulate the data from the file in order to prepare it for insertion into ClickHouse. We will often have you query a file with clickhouse local and pipe the output to clickhouse client to stream the data into ClickHouse. There are example datasets that use both clickhouse client and clickhouse local in the Next Steps section at the end of this document. If you have already installed ClickHouse server locally you may have clickhouse client and clickhouse local installed. Check by running clickhouse client and clickhouse local at the commandline. Otherwise, follow the instructions for your operating system. In Microsoft Windows 10 or 11 with the Windows Subsystem for Linux (WSL) version 2 (WSL 2) you can run Ubuntu Linux, and then run clickhouse client and clickhouse local. Install WSL by following Microsoft's WSL documentation. By running the bash command from your terminal you will enter WSL: ``` bash``` ``` curl https://clickhouse.com/ | sh``` ``` ./clickhouse client``` clickhouse client will try to connect to a local ClickHouse server instance, if you do not have one running it will timeout. See the clickhouse-client docs for examples. ``` ./clickhouse local``` See the NYPD Complaint dataset for example use of both clickhouse-client and clickhouse-local. See the clickhouse-client docs. See the clickhouse-local docs. See the ClickHouse install docs." } ]
{ "category": "App Definition and Development", "file_name": "why-clickhouse-is-so-fast.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "It was designed to be fast. Query execution performance has always been a top priority during the development process, but other important characteristics like user-friendliness, scalability, and security were also considered so ClickHouse could become a real production system. \"Building for Fast\" talk from ClickHouse Meetup Amsterdam, June 2022. \"Secrets of ClickHouse Performance Optimizations\" talk from Big Data Technology Conference, December 2019, offers a more technical take on the same topic. ClickHouse was initially built as a prototype to do just a single task well: to filter and aggregate data as fast as possible. Thats what needs to be done to build a typical analytical report, and thats what a typical GROUP BY query does. The ClickHouse team has made several high-level decisions that, when combined, made achieving this task possible: Column-oriented storage: Source data often contain hundreds or even thousands of columns, while a report can use just a few of them. The system needs to avoid reading unnecessary columns to avoid expensive disk read operations. Indexes: Memory resident ClickHouse data structures allow the reading of only the necessary columns, and only the necessary row ranges of those columns. Data compression: Storing different values of the same column together often leads to better compression ratios (compared to row-oriented systems) because in real data a column often has the same, or not so many different, values for neighboring rows. In addition to general-purpose compression, ClickHouse supports specialized codecs that can make data even more compact. Vectorized query execution: ClickHouse not only stores data in columns but also processes data in columns. This leads to better CPU cache utilization and allows for SIMD CPU instructions usage. Scalability: ClickHouse can leverage all available CPU cores and disks to execute even a single query. Not only on a single server but all CPU cores and disks of a cluster as well. But many other database management systems use similar techniques. What really makes ClickHouse stand out is attention to low-level details. Most programming languages provide implementations for most common algorithms and data structures, but they tend to be too generic to be effective. Every task can be considered as a landscape with various characteristics, instead of just throwing in random implementation. For example, if you need a hash table, here are some key questions to consider: Hash table is a key data structure for GROUP BY implementation and ClickHouse automatically chooses one of 30+ variations for each specific query. The same goes for algorithms, for example, in sorting you might consider: Algorithms that rely on characteristics of data they are working with can often do better than their generic counterparts. If it is not really known in advance, the system can try various implementations and choose the one that works best in runtime. For example, see an article on how LZ4 decompression is implemented in ClickHouse. Last but not least, the ClickHouse team always monitors the Internet on people claiming that they came up with the best implementation, algorithm, or data structure to do something and tries it out. Those claims mostly appear to be false, but from time to time youll indeed find a gem." } ]
{ "category": "App Definition and Development", "file_name": "multi-active-availability.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us CockroachDB's availability model is described as \"Multi-Active Availability.\" In essence, multi-active availability provides benefits similar to traditional notions of high availability, but also lets you read and write from every node in your cluster without generating any conflicts. High availability lets an application continue running even if a system hosting one of its services fails. This is achieved by scaling the application's services horizontally, i.e., replicating the service across many machines or systems. If any one of them fails, the others can simply step in and perform the same service. Before diving into the details of CockroachDB's multi-active availability, we'll review the two most common high availability designs: Active-Passive and Active-Active systems. In active-passive systems, all traffic is routed to a single, \"active\" replica. Changes to the replica's state are then copied to a backup \"passive\" replica, in an attempt to always mirror the active replica as closely as possible. However, this design has downsides: In active-active systems, multiple replicas run identical services, and traffic is routed to all of them. If any replica fails, the others simply handle the traffic that would've been routed to it. For databases, though, active-active replication is incredibly difficult to instrument for most workloads. For example, if you let multiple replicas handle writes for the same keys, how do you keep them consistent? For this example, we have 2 replicas (A, B) in an active-active high availability cluster. Multi-active availability is CockroachDB's version of high availability (keeping your application online in the face of partial failures), which we've designed to avoid the downsides of both active-passive and traditional active-active systems. Like active-active designs, all replicas can handle traffic, including both reads and writes. However, CockroachDB improves upon that design by also ensuring that data remains consistent across them, which we achieve by using \"consensus replication.\" In this design, replication requests are sent to at least 3 replicas, and are only considered committed when a majority of replicas acknowledge that they've received it. This means that you can still have failures without compromising availability. To prevent conflicts and guarantee your data's consistency, clusters that lose a majority of replicas stop responding because they've lost the ability to reach a consensus on the state of your data. When a majority of replicas are restarted, your database resumes operation. For this example, we have 3 CockroachDB nodes (A, B, C) in a multi-active availability cluster. To get a greater understanding of how CockroachDB is a survivable system that enforces strong consistency, check out our architecture documentation. To see Multi-Active Availability in action, see this availability demo. Was this helpful? Was this helpful? Product Resources Learn Support Channels Company Get developer news Thanks!" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us CockroachDB supports bundling multiple SQL statements into a single all-or-nothing transaction. Each transaction guarantees ACID semantics spanning arbitrary tables and rows, even when data is distributed. If a transaction succeeds, all mutations are applied together with virtual simultaneity. If any part of a transaction fails, the entire transaction is aborted, and the database is left unchanged. By default, CockroachDB guarantees that while a transaction is pending, it is isolated from other concurrent transactions with SERIALIZABLE isolation. For a detailed discussion of CockroachDB transaction semantics, see How CockroachDB Does Distributed Atomic Transactions and Serializable, Lockless, Distributed: Isolation in CockroachDB. The explanation of the transaction model described in this blog post is slightly out of date. See the Transaction Retries section for more details. The following SQL statements control transactions. | Statement | Description | |:-|:| | BEGIN | Initiate a transaction and optionally set its priority, access mode, \"as of\" timestamp, or isolation level. | | COMMIT | Commit a regular transaction, or clear the connection after committing a transaction using the advanced retry protocol. | | RELEASE SAVEPOINT | Commit a nested transaction; also used for retryable transactions. | | ROLLBACK | Abort a transaction and roll the database back to its state before the transaction began. | | ROLLBACK TO SAVEPOINT | Roll back a nested transaction; also used to handle retryable transaction errors. | | SAVEPOINT | Used for nested transactions; also used to implement advanced client-side transaction retries. | | SET TRANSACTION | Set a transaction's priority, access mode, \"as of\" timestamp, or isolation level. | | SHOW | Display the current transaction settings. | If you are using a framework or library that does not have advanced retry logic built in, you should implement an application-level retry loop with exponential backoff. See Client-side retry handling. In CockroachDB, a transaction is set up by surrounding SQL statements with the BEGIN and COMMIT statements. To use advanced client-side transaction retries, you should also include the SAVEPOINT, ROLLBACK TO SAVEPOINT and RELEASE SAVEPOINT statements. ``` BEGIN; SAVEPOINT cockroach_restart; <transaction statements> RELEASE SAVEPOINT cockroach_restart; COMMIT; ``` At any time before it's committed, you can abort the transaction by executing the ROLLBACK statement. Clients using transactions must also include logic to handle retries. To handle errors in transactions, you should check for the following types of server-side errors: | Type | Description | |:-|:-| | Transaction Retry Errors | Errors with the code 40001 and string restart transaction, which indicate that a transaction failed because it could not be placed in a serializable ordering of transactions by CockroachDB. This occurs under SERIALIZABLE isolation and only rarely under READ COMMITTED isolation. For details on transaction retry errors and how to resolve them, see the Transaction Retry Error Reference. 
| | Ambiguous Errors | Errors with the code 40003, which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed." }, { "data": "How you handle these errors depends on how you want to resolve the ambiguity. For information about how to handle ambiguous errors, see here. | | SQL Errors | All other errors, which indicate that a statement in the transaction failed. For example, violating the UNIQUE constraint generates a 23505 error. After encountering these errors, you can either issue a COMMIT or ROLLBACK to abort the transaction and revert the database to its state before the transaction began. If you want to attempt the same set of statements again, you must begin a completely new transaction. | Transactions may require retries due to contention with another concurrent or recent transaction attempting to write to the same data. There are two cases in which transaction retries can occur: To reduce the need for transaction retries, see Reduce transaction contention. CockroachDB automatically retries individual statements (implicit transactions) and transactions sent from the client as a single batch, as long as the size of the results being produced for the client, including protocol overhead, is less than 16KiB by default. Once that buffer overflows, CockroachDB starts streaming results back to the client, at which point automatic retries cannot be performed any more. As long as the results of a single statement or batch of statements are known to stay clear of this limit, the client does not need to worry about transaction retries. You can increase the occurrence of automatic retries as a way to minimize transaction retry errors: Use ALTER ROLE ALL SET {session_var} = {val} instead of the sql.defaults.* cluster settings. This allows you to set a default value for all users for any session variable that applies during login, making the sql.defaults.* cluster settings redundant. Individual statements are treated as implicit transactions, and so they fall under the rules described above. If the results are small enough, they will be automatically retried. In particular, INSERT/UPDATE/DELETE statements without a RETURNING clause are guaranteed to have minuscule result sizes. For example, the following statement would be automatically retried by CockroachDB: ``` DELETE FROM customers WHERE id = 1; ``` Transactions can be sent from the client as a single batch. Batching implies that CockroachDB receives multiple statements without being asked to return results in between them; instead, CockroachDB returns results after executing all of the statements, except when the accumulated results overflow the buffer mentioned above, in which case they are returned sooner and automatic retries can no longer be performed. Batching is generally controlled by your driver or client's behavior. Technically, it can be achieved in two ways, both supporting automatic retries: When the client/driver is using the PostgreSQL Extended Query protocol, a batch is made up of all queries sent in between two Sync messages. Many drivers support such batches through explicit batching constructs. When the client/driver is using the PostgreSQL Simple Query protocol, a batch is made up of semicolon-separated strings sent as a unit to CockroachDB. 
For example, in Go, this code would send a single batch (which would be automatically retried): ``` db.Exec( \"BEGIN; DELETE FROM customers WHERE id = 1; DELETE FROM orders WHERE customer = 1; COMMIT;\" ) ``` Within a batch of statements, CockroachDB infers that the statements are not conditional on the results of previous statements, so it can retry all of them. Of course, if the transaction relies on conditional logic (e.g.," }, { "data": "statement 2 is executed only for some results of statement 1), then the transaction cannot be all sent to CockroachDB as a single batch. In these common cases, CockroachDB cannot retry, say, statement 2 in isolation. Since results for statement 1 have already been delivered to the client by the time statement 2 is forcing the transaction to retry, the client needs to be involved in retrying the whole transaction and so you should write your transactions to use client-side retry handling. The enable_implicit_transaction_for_batch_statements session variable defaults to true. This means that any batch of statements is treated as an implicit transaction, so the BEGIN/COMMIT commands are not needed to group all the statements in one transaction. In the event bounded staleness reads are used along with either the with_min_timestamp function or the with_max_staleness function and the nearest_only parameter is set to true, the query will throw an error if it can't be served by a nearby replica. CockroachDB supports the nesting of transactions using savepoints. These nested transactions are also known as sub-transactions. Nested transactions can be rolled back without discarding the state of the entire surrounding transaction. This can be useful in applications that abstract database access using an application development framework or ORM. Different components of the application can operate on different sub-transactions without having to know about each other's internal operations, while trusting that the database will maintain isolation between sub-transactions and preserve data integrity. Just as COMMIT and ROLLBACK are used to commit and discard entire transactions, respectively, RELEASE SAVEPOINT and ROLLBACK TO SAVEPOINT are used to commit and discard nested transactions. This relationship is shown in the following table: | Statement | Effect | |:-|:| | COMMIT | Commit an entire transaction. | | ROLLBACK | Discard an entire transaction. | | RELEASE SAVEPOINT | Commit (really, forget) the named nested transaction. | | ROLLBACK TO SAVEPOINT | Discard the changes in the named nested transaction. | For more information, including examples showing how to use savepoints to create nested transactions, see the savepoints documentation. Every transaction in CockroachDB is assigned an initial priority. By default, the transaction priority is NORMAL. Cockroach Labs recommends leaving the transaction priority at the default setting in almost all cases. Changing the transaction priority to HIGH in particular can lead to difficult-to-debug interactions with other transactions executing on the system. If you are setting a transaction priority to avoid contention or hot spots, or to get better query performance, it is usually a sign that you need to update your schema design and/or review the data access patterns of your workload. 
For transactions that you are absolutely sure should be given higher or lower priority, you can set the priority in the BEGIN statement: ``` BEGIN PRIORITY <LOW | NORMAL | HIGH>; ``` You can also set the priority immediately after a transaction is started: ``` SET TRANSACTION PRIORITY <LOW | NORMAL | HIGH>; ``` To set the default transaction priority for all transactions in a session, use the default_transaction_priority session variable. For example: ``` SET default_transaction_priority = 'low'; ``` transaction_priority is a read-only session" }, { "data": "variable. To view the current priority of a transaction, use SHOW transaction_priority or SHOW TRANSACTION PRIORITY: ``` SHOW transaction_priority; ``` ``` transaction_priority low ``` ``` SHOW TRANSACTION PRIORITY; ``` ``` transaction_priority low ``` Isolation is an element of ACID transactions that determines how concurrency is controlled, and ultimately guarantees consistency. CockroachDB offers two transaction isolation levels: SERIALIZABLE and READ COMMITTED. By default, CockroachDB executes all transactions at the strongest ANSI transaction isolation level: SERIALIZABLE, which permits no concurrency anomalies. To place all transactions in a serializable ordering, SERIALIZABLE isolation may require transaction restarts and client-side retry handling. For a demonstration of how SERIALIZABLE prevents anomalies such as write skew, see Serializable Transactions. CockroachDB can be configured to execute transactions at READ COMMITTED instead of SERIALIZABLE isolation. If enabled, READ COMMITTED is no longer an alias for SERIALIZABLE. READ COMMITTED permits some concurrency anomalies in exchange for minimizing transaction aborts and removing the need for client-side retries. Depending on your workload requirements, this may be desirable. For more information, see Read Committed Transactions. CockroachDB uses slightly different isolation levels than ANSI SQL isolation levels. SNAPSHOT, READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ are aliases for SERIALIZABLE. If READ COMMITTED isolation is enabled using the sql.txn.read_committed_isolation.enabled cluster setting, READ COMMITTED is no longer an alias for SERIALIZABLE, and READ UNCOMMITTED becomes an alias for READ COMMITTED. The CockroachDB SERIALIZABLE isolation level is stronger than the ANSI SQL READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ levels and equivalent to the ANSI SQL SERIALIZABLE level. The CockroachDB READ COMMITTED isolation level is stronger than the PostgreSQL READ COMMITTED isolation level, and is the strongest isolation level that does not experience serialization errors that require client-side handling. For more information about the relationship between these levels, see A Critique of ANSI SQL Isolation Levels. You can limit the number of rows written or read in a transaction at the cluster or session level. This allows you to configure CockroachDB to log or reject statements that could destabilize a cluster or violate application best practices. When the transaction_rows_read_err session setting is enabled, transactions that read more than the specified number of rows will fail. In addition, the optimizer will not create query plans with scans that exceed the specified row limit. For example, to set a default value for all users at the cluster level: ``` ALTER ROLE ALL SET transaction_rows_read_err = 1000; ``` When the transaction_rows_written_err session setting is enabled, transactions that write more than the specified number of rows will fail. 
For example, to set a default value for all users at the cluster level: ``` ALTER ROLE ALL SET transaction_rows_written_err = 1000; ``` To assess the impact of configuring these session settings, use the corresponding session settings transaction_rows_read_log and transaction_rows_written_log to log transactions that read or write the specified number of rows. Transactions are logged to the SQL_PERF channel. The limits are enforced after each statement of a transaction has been fully executed. The \"write\" limits apply to INSERT, INSERT INTO SELECT FROM, INSERT ON CONFLICT, UPSERT, UPDATE, and DELETE SQL statements. The \"read\" limits apply to the SELECT statement in addition to the statements subject to the \"write\" limits. The limits do not apply to CREATE TABLE AS, IMPORT, TRUNCATE, DROP, ALTER TABLE, BACKUP, RESTORE, or CREATE STATISTICS statements. Enabling transaction_rows_read_err disables a performance optimization for mutation statements in implicit transactions where CockroachDB can auto-commit without additional network round trips." } ]
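The record above recommends an application-level retry loop with exponential backoff for clients whose framework or library lacks built-in retry logic, and notes that retryable errors carry code 40001 and the string restart transaction. The following Go sketch, using database/sql, shows one way such a loop could be structured. The connection string, table names, and the string-based error check are illustrative assumptions; production code should inspect the driver's typed error (or use a purpose-built helper library) rather than matching error text.

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"strings"
	"time"

	_ "github.com/lib/pq" // assumption: any PostgreSQL-wire driver works here
)

// isRetryableErr reports whether an error looks like a CockroachDB
// transaction retry error (code 40001 / "restart transaction").
// A real implementation should check the driver's typed error instead
// of matching strings.
func isRetryableErr(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.Contains(msg, "40001") || strings.Contains(msg, "restart transaction")
}

// runTransaction executes fn inside a transaction, retrying with
// exponential backoff whenever CockroachDB asks the client to retry.
func runTransaction(ctx context.Context, db *sql.DB, fn func(*sql.Tx) error) error {
	backoff := 100 * time.Millisecond
	const maxRetries = 5

	for attempt := 0; attempt <= maxRetries; attempt++ {
		tx, err := db.BeginTx(ctx, nil)
		if err != nil {
			return err
		}
		if err = fn(tx); err == nil {
			err = tx.Commit()
		}
		if err == nil {
			return nil // success
		}
		_ = tx.Rollback() // abort and roll back before deciding what to do next

		if !isRetryableErr(err) {
			return err // SQL or ambiguous error: surface it to the caller
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("transaction gave up after %d retries", maxRetries)
}

func main() {
	// Placeholder connection string and schema.
	db, err := sql.Open("postgres", "postgresql://user@localhost:26257/bank?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	err = runTransaction(context.Background(), db, func(tx *sql.Tx) error {
		// Conditional logic between statements is exactly what prevents
		// sending the whole transaction as a single auto-retried batch.
		if _, err := tx.Exec("DELETE FROM customers WHERE id = 1"); err != nil {
			return err
		}
		_, err := tx.Exec("DELETE FROM orders WHERE customer = 1")
		return err
	})
	if err != nil {
		panic(err)
	}
}
```

Passing each unit of work in as a closure keeps the retry policy in one place, which is one common way to satisfy the "client-side retry handling" requirement described above.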
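The same record describes nested transactions built from SAVEPOINT, RELEASE SAVEPOINT, and ROLLBACK TO SAVEPOINT. Continuing the previous sketch (same imports, same runTransaction wrapper), this hedged example shows a sub-transaction whose failure is discarded without aborting the outer transaction; the table and savepoint names are made up for illustration.

```go
// nestedInsert runs an optional insert inside a savepoint so that a
// failure (for example, a 23505 UNIQUE violation) discards only the
// nested transaction's changes, not the surrounding transaction.
// Intended to be called from inside the fn passed to runTransaction.
func nestedInsert(tx *sql.Tx) error {
	if _, err := tx.Exec("SAVEPOINT audit_entry"); err != nil {
		return err
	}
	if _, err := tx.Exec("INSERT INTO audit_log (customer_id) VALUES (1)"); err != nil {
		// Discard only the changes made since the savepoint.
		if _, rbErr := tx.Exec("ROLLBACK TO SAVEPOINT audit_entry"); rbErr != nil {
			return rbErr
		}
		return nil // the outer transaction continues
	}
	// Commit ("forget") the nested transaction.
	_, err := tx.Exec("RELEASE SAVEPOINT audit_entry")
	return err
}
```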
{ "category": "App Definition and Development", "file_name": "multiregion-overview.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us This page provides an overview of CockroachDB multi-region capabilities. CockroachDB multi-region capabilities make it easier to run global applications. To use these capabilities effectively, you should understand the following concepts: At a high level, the simplest process for running a multi-region cluster is: These steps describe the simplest case, where you accept all of the default settings. The latter two steps are optional, but table locality and survival goals have a significant impact on performance. Therefore Cockroach Labs recommends that you give these aspects some consideration when you choose a multi-region configuration. For new clusters using the multi-region SQL abstractions, Cockroach Labs recommends lowering the --max-offset setting to 250ms. This setting is especially helpful for lowering the write latency of global tables. Nodes can run with different values for --max-offset, but only for the purpose of updating the setting across the cluster using a rolling upgrade. You define a cluster region at the node level using the region key and the zone using the zone key in the node startup locality options. For example, the following command adds us-east-1 to the list of cluster regions and us-east-1b to the list of zones: ``` cockroach start --locality=region=us-east-1,zone=us-east-1b # ... other required flags go here ``` To show all of a cluster's regions, execute the following SQL statement: ``` SHOW REGIONS FROM CLUSTER; ``` A database region is a high-level abstraction for a geographic region. Each region is broken into multiple zones. These terms are meant to correspond directly to the region and zone terminology used by cloud providers. The regions added during node startup become database regions when you add them to a database. To add the first region, use the ALTER DATABASE ... PRIMARY REGION statement. While the database has only one region assigned to it, it is considered a \"multi-region database.\" This means that all data in that database is stored within its assigned regions, and CockroachDB optimizes access to the database's data from the primary region. If the default survival goals and table localities meet your needs, there is nothing else you need to do once you have set a database's primary region. To add another database region, use the ALTER DATABASE ... ADD REGION statement. To show all of a database's regions, execute the SHOW REGIONS FROM DATABASE statement. If the default survival goals and table localities meet your needs, there is nothing else you need to do once you have set a database's primary region. This feature is in preview. This feature is subject to change. To share feedback and/or issues, contact Support. Super regions allow you to define a set of database regions such that the following schema objects will have all of their replicas stored only in regions that are members of the super region: The primary use case for super regions is data domiciling. As mentioned above, data from regional and regional by row tables will be stored only in regions that are members of the super region. 
Further, if the super region contains 3 or more regions and if you use REGION survival goals, the data domiciled in the super region will remain available if you lose a" }, { "data": "To use super regions, keep the following considerations in mind: For more information about how to enable and use super regions, see: Note that super regions take a different approach to data domiciling than ALTER DATABASE ... PLACEMENT RESTRICTED. Specifically, super regions make it so that all replicas (both voting and non-voting) are placed within the super region, whereas PLACEMENT RESTRICTED makes it so that there are no non-voting replicas. For a demo on Super Regions, watch the following video: For more information about data domiciling using PLACEMENT RESTRICTED, see Data Domiciling with CockroachDB. If you want to do data domiciling for databases with region survival goals using the higher-level multi-region abstractions, you must use super regions. Using ALTER DATABASE ... PLACEMENT RESTRICTED will not work for databases that are set up with region survival goals. Super regions rely on the underlying replication zone system, which was historically built for performance, not for domiciling. The replication system's top priority is to prevent the loss of data and it may override the zone configurations if necessary to ensure data durability. For more information, see Replication Controls. If you are using super regions in your cluster, there are additional constraints when using secondary regions: Secondary regions allow you to define a database region that will be used for failover in the event your primary region goes down. In other words, the secondary region will act as the primary region if the original primary region fails. Secondary regions work as follows: when a secondary region is added to a database, a lease preference is added to the tables and indexes in that database to ensure that two voting replicas are moved into the secondary region. This behavior is an improvement over versions of CockroachDB prior to v22.2. In those versions, when the primary region failed, the leaseholders would be transferred to another database region at random, which could have negative effects on performance. For more information about how to use secondary regions, see: If you are using super regions in your cluster, there are additional constraints when using secondary regions: A survival goal dictates how many simultaneous failure(s) a database can survive. All tables within the same database operate with the same survival goal. Each database can have its own survival goal setting. For more information, refer to Multi-Region Survival Goals. Table locality determines how CockroachDB optimizes access to the table's data. Every table in a multi-region database has a \"table locality setting\" that configures one or more home regions at the table or row level. For more information, refer to Table Localities. The features listed in this section make working with multi-region clusters easier. Zone Config Extensions are a customization tool for advanced users to persistently modify the configuration generated by the standard multi-region SQL abstractions on a per-region basis. For more information, see Zone Config Extensions. To reduce latency while making online schema changes, we recommend specifying a lease_preference zone configuration on the system database to a single region and running all subsequent schema changes from a node within that region. 
For example, if the majority of online schema changes come from machines that are geographically close to us-east1, run the following: ``` ALTER DATABASE system CONFIGURE ZONE USING constraints = '{\"+region=us-east1\": 1}', lease_preferences = '[[+region=us-east1]]'; ``` Run all subsequent schema changes from a node in the specified region. If you do not intend to run more schema changes from that region, you can safely remove the lease preference from the zone configuration for the system database." } ]
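As a rough illustration of the workflow described above (set a primary region first, then add further database regions), the following Go sketch issues the documented statements over a SQL connection and reads back the result. The database name (movr), the region names, and the connection string are placeholders, and the named regions must already have been defined as cluster regions via --locality at node startup.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // placeholder driver; any PostgreSQL-wire driver works
)

func main() {
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Set the primary region, then add the remaining database regions.
	stmts := []string{
		`ALTER DATABASE movr PRIMARY REGION "us-east-1"`,
		`ALTER DATABASE movr ADD REGION "us-west-2"`,
		`ALTER DATABASE movr ADD REGION "eu-west-1"`,
	}
	for _, s := range stmts {
		if _, err := db.Exec(s); err != nil {
			log.Fatalf("%s: %v", s, err)
		}
	}

	// Verify the configuration with the statement shown in the docs above.
	rows, err := db.Query(`SHOW REGIONS FROM DATABASE movr`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	count := 0
	for rows.Next() {
		count++
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("database movr now has %d region(s)\n", count)
}
```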
{ "category": "App Definition and Development", "file_name": "migration-overview.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us This page provides an overview of how to migrate a database to CockroachDB. If you need to migrate data from a CockroachDB Serverless cluster to a CockroachDB Dedicated cluster, see Migrate data from Serverless to Dedicated. A database migration broadly consists of the following phases: If you need help migrating to CockroachDB, contact our sales team. Consider the following as you plan your migration: Create a document that summarizes the intent of the migration, the technical details, and the team members involved. A primary consideration is whether your application can tolerate downtime: Take the following two use cases: If your application can tolerate downtime, then it will likely be easiest to take your application offline, load a snapshot of the data into CockroachDB, and perform a cutover to CockroachDB once the data is migrated. This is known as a lift-and-shift migration. A lift-and-shift approach is the most straightforward. However, it's important to fully prepare the migration in order to be certain that it can be completed successfully during the downtime window. Scheduled downtime is made known to your users in advance. Once you have prepared for the migration, you take the application offline, conduct the migration, and bring the application back online on CockroachDB. To succeed, you should estimate the amount of downtime required to migrate your data, and ideally schedule the downtime outside of peak hours. Scheduling downtime is easiest if your application traffic is \"periodic\", meaning that it varies by the time of day, day of week, or day of month. Unscheduled downtime impacts as few customers as possible, ideally without impacting their regular usage. If your application is intentionally offline at certain times (e.g., outside business hours), you can migrate the data without users noticing. Alternatively, if your application's functionality is not time-sensitive (e.g., it sends batched messages or emails), then you can queue requests while your system is offline, and process those requests after completing the migration to CockroachDB. Reduced functionality takes some, but not all, application functionality offline. For example, you can disable writes but not reads while you migrate the application data, and queue data to be written after completing the migration. For an overview of lift-and-shift migrations to CockroachDB, see Lift and Shift. If your application cannot tolerate downtime, then you should aim for a \"zero-downtime\" approach. This reduces downtime to an absolute minimum, such that users do not notice the migration. The minimum possible downtime depends on whether you can tolerate inconsistency in the migrated data: Migrations performed using consistent cutover reduce downtime to an absolute minimum (i.e., seconds or sub-seconds) while keeping data synchronized between the source database and CockroachDB. Consistency requires downtime. In this approach, downtime occurs right before cutover, as you drain the remaining transactions from the source database to CockroachDB. 
Migrations performed using immediate cutover can reduce downtime to zero. These require the most preparation, and typically allow read/write traffic to both databases for at least a short period of time, sacrificing consistency for availability. Without stopping application traffic, you perform an immediate cutover, while assuming that some writes will not be replicated to CockroachDB. You may want to manually reconcile these data inconsistencies after switching over. For an overview of zero-downtime migrations to CockroachDB, see Zero" }, { "data": "Cutover is the process of switching application traffic from the source database to CockroachDB. Consider the following: Will you perform the cutover all at once, or incrementally (e.g., by a subset of users, workloads, or tables)? Will you have a fallback plan that allows you to reverse (\"roll back\") the migration from CockroachDB to the source database? A fallback plan enables you to fix any issues or inconsistencies that you encounter during or after cutover, then retry the migration. This is the simplest cutover method, since you won't need to develop and execute a fallback plan. As part of migration preparations, you will have already tested your queries and performance to have confidence to migrate without a rollback option. After moving all of the data from the source database to CockroachDB, you switch application traffic to CockroachDB. This method adds a fallback plan to the simple all-at-once cutover. In addition to moving data to CockroachDB, data is also replicated from CockroachDB back to the source database in case you need to roll back the migration. Continuous replication is already possible when performing a zero-downtime migration that dual writes to both databases. Otherwise, you will need to ensure that data is replicated in the reverse direction at cutover. The challenge is to find a point at which both the source database and CockroachDB are in sync, so that you can roll back to that point. You should also avoid falling into a circular state where updates continuously travel back and forth between the source database and CockroachDB. Also known as the \"strangler fig\" approach, a phased rollout migrates a portion of your users, workloads, or tables over time. Until all users, workloads, and/or tables are migrated, the application will continue to write to both databases. This approach enables you to take your time with the migration, and to pause or roll back as you monitor the migration for issues and performance. Rolling back the migration involves the same caveats and considerations as for the all-at-once method. Because you can control the blast radius of your migration by routing traffic for a subset of users or services, a phased rollout has reduced business risk and user impact at the cost of increased implementation risk. You will need to figure out how to migrate in phases while ensuring that your application is unaffected. If you need help migrating to CockroachDB, contact our sales team. Determine the size of the target CockroachDB cluster. To do this, consider your data volume and workload characteristics: Use this information to size the CockroachDB cluster you will create. If you are migrating to a CockroachDB Cloud cluster, see Plan Your Cluster for details: If you are migrating to a CockroachDB Self-Hosted cluster: As you develop your migration plan, consider the application changes that you will need to make. 
These may relate to the following: Follow these recommendations when converting your schema for compatibility with CockroachDB. The Schema Conversion Tool automatically identifies potential improvements to your schema. You should define an explicit primary key on every table. For more information, see Primary key best practices. Do not use a sequence to define a primary key column. Instead, Cockroach Labs recommends that you use multi-column primary keys or auto-generating unique IDs for primary key columns. By default on CockroachDB, INT is an alias for INT8, which creates 64-bit signed integers. Depending on your source database or application requirements, you may need to change the integer size to 4. For example, PostgreSQL defaults to 32-bit integers. For more information, see Considerations for 64-bit signed" }, { "data": "Review the best practices for creating secondary indexes on CockroachDB. We discourage indexing on sequential keys. If a table must be indexed on sequential keys, use hash-sharded indexes. Hash-sharded indexes distribute sequential traffic uniformly across ranges, eliminating single-range hot spots and improving write performance on sequentially-keyed indexes at a small cost to read performance. Optimize your queries against transaction contention. You may encounter transaction retry errors when you test application queries, as well as transaction contention due to long-running transactions when you conduct the migration and bulk load data. Transaction retry errors are more frequent under CockroachDB's default SERIALIZABLE isolation level. If you are migrating an application that was built at a READ COMMITTED isolation level, you should first enable READ COMMITTED isolation on the CockroachDB cluster for compatibility. Update your queries to resolve differences in functionality and SQL syntax. The Schema Conversion Tool automatically flags syntax incompatibilities and unimplemented features in your schema. CockroachDB supports the PostgreSQL wire protocol and is largely compatible with PostgreSQL syntax. However, the following PostgreSQL features do not yet exist in CockroachDB: Drop primary key. Each table must have a primary key associated with it. You can drop and add a primary key constraint within a single transaction. XML functions. Column-level privileges. XA syntax. Creating a database from a template. Dropping a single partition from a table. Foreign data wrappers. Advisory Lock Functions (although some functions are defined with no-op implementations). If your source database uses any of the preceding features, you may need to implement workarounds in your schema design, in your data manipulation language (DML), or in your application code. For more details on the CockroachDB SQL implementation, see SQL Feature Support. Once you have a migration plan, prepare the team, application, source database, and CockroachDB cluster for the migration. This step is optional. To minimize issues after cutover, compose a migration \"pre-mortem\": This step is optional. Based on the error budget you defined in your migration plan, identify the metrics that you can use to measure your success criteria and set up monitoring for the migration. These metrics may be identical to those you normally use in production, but can also be specific to your migration needs. 
In the following order: You can use the following MOLT (Migrate Off Legacy Technology) tools to simplify these steps: First, convert your database schema to an equivalent CockroachDB schema: Use the Schema Conversion Tool to convert your schema line-by-line. This requires a free CockroachDB Cloud account. The tool will convert the syntax, identify unimplemented features and syntax incompatibilities in the schema, and suggest edits according to CockroachDB best practices. Alternatively, manually convert the schema according to our schema design best practices. You can also export a partially converted schema from the Schema Conversion Tool to finish the conversion manually. Then import the converted schema to a CockroachDB cluster: Before moving data, Cockroach Labs recommends dropping any indexes on the CockroachDB database. The indexes can be recreated after the data is loaded. Doing so will optimize performance. After converting the schema, load your data into CockroachDB so that you can test your application queries. Then use MOLT Fetch to move the source data to CockroachDB. Alternatively, you can use one of the following methods to migrate the data. Additional tooling may be required to extract or convert the data to a supported file format. After you load the test data, validate your queries on CockroachDB. You can do this by shadowing or by manually testing the queries. Note that CockroachDB defaults to the SERIALIZABLE transaction isolation" }, { "data": "If you are migrating an application that was built at a READ COMMITTED isolation level on the source database, you must enable READ COMMITTED isolation on the CockroachDB cluster for compatibility. You can \"shadow\" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then validate the queries on CockroachDB for consistency, performance, and potential issues with the migration. The CockroachDB Live Migration Service (MOLT LMS) can perform shadowing. This is intended only for testing or performing a dry run. Shadowing should not be used in production when performing a live migration. You can manually validate your queries by testing a subset of \"critical queries\" on an otherwise idle CockroachDB cluster: Check the application logs for error messages and the API response time. If application requests are slower than expected, use the SQL Activity page on the CockroachDB Cloud Console or DB Console to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for SQL performance. Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use MOLT Verify. Test performance on a CockroachDB cluster that is appropriately sized for your workload: Run the application with single- or very low-concurrency and verify the app's performance is acceptable. The cluster should be provisioned with more than enough resources to handle this workload, because you need to verify that the queries will be fast enough when there are zero resource bottlenecks. Run stress tests with at least the production concurrency and rate, but ideally higher in order to verify that the system can handle unexpected spikes in load. This can also uncover contention issues that will appear during spikes in app load, which may require application design changes to avoid. 
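To make the stress-testing advice above a little more concrete, here is a minimal Go sketch that issues a placeholder query from many concurrent workers and reports a mean latency. The concurrency level, query, and connection string are assumptions; a real test should replay representative production traffic at or above production concurrency rather than a single statement.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"sync"
	"time"

	_ "github.com/lib/pq" // placeholder driver
)

func main() {
	// Placeholder DSN and query; substitute one of your "critical queries".
	db, err := sql.Open("postgres", "postgresql://app@localhost:26257/appdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	const workers = 64           // at or above production concurrency
	const requestsPerWorker = 100

	var wg sync.WaitGroup
	latencies := make(chan time.Duration, workers*requestsPerWorker)

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < requestsPerWorker; j++ {
				start := time.Now()
				if _, err := db.Exec("SELECT 1"); err != nil {
					log.Printf("query error: %v", err)
					continue
				}
				latencies <- time.Since(start)
			}
		}()
	}
	wg.Wait()
	close(latencies)

	var total time.Duration
	var n int
	for d := range latencies {
		total += d
		n++
	}
	if n > 0 {
		fmt.Printf("%d requests completed, mean latency %v\n", n, total/time.Duration(n))
	}
}
```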
To further minimize potential surprises when you conduct the migration, practice cutover using your application and similar volumes of data on a \"dry-run\" environment. Use a test or development environment that is as similar as possible to production. Performing a dry run is highly recommended. In addition to demonstrating how long the migration may take, a dry run also helps to ensure that team members understand what they need to do during the migration, and that changes to the application are coordinated. Before proceeding, double-check that you are prepared to migrate. Once you are ready to migrate, optionally drop the database and delete the test cluster so that you can get a clean start: ``` DROP DATABASE {database-name} CASCADE; ``` Alternatively, truncate each table you used for testing to avoid having to recreate your schema: ``` TRUNCATE {table-name} CASCADE; ``` Migrate your data to CockroachDB using the method that is appropriate for your downtime requirements and cutover strategy. Using this method, consistency is achieved by only performing the cutover once all writes have been replicated from the source database to CockroachDB. This requires downtime during which the application traffic is stopped. The following is a high-level overview of the migration steps. For considerations and details about the pros and cons of this approach, see Migration Strategy: Lift and Shift. During a \"live migration\", downtime is minimized by performing the cutover while writes are still being replicated from the source database to CockroachDB. Inconsistencies are resolved through manual reconciliation. The following is a high-level overview of the migration steps. The two approaches are mutually exclusive, and each has tradeoffs. To prioritize consistency and minimize downtime: To achieve zero downtime with inconsistency: After you have successfully conducted the migration:" } ]
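The validation guidance in this record points to MOLT Verify for checking that results match between the source database and CockroachDB. Purely as an illustration of that idea (not a replacement for MOLT Verify), this Go sketch runs the same aggregate query against both endpoints and compares the outcome; the connection strings and the query are placeholders.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // both endpoints speak the PostgreSQL wire protocol
)

// countRows runs an aggregate query against one endpoint and returns the result.
func countRows(dsn, query string) (int64, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return 0, err
	}
	defer db.Close()
	var n int64
	err = db.QueryRow(query).Scan(&n)
	return n, err
}

func main() {
	const query = "SELECT count(*) FROM customers" // placeholder spot check

	src, err := countRows("postgresql://app@source-host:5432/appdb", query)
	if err != nil {
		log.Fatal(err)
	}
	crdb, err := countRows("postgresql://app@cockroach-host:26257/appdb", query)
	if err != nil {
		log.Fatal(err)
	}

	if src != crdb {
		fmt.Printf("MISMATCH: source=%d cockroachdb=%d\n", src, crdb)
	} else {
		fmt.Printf("row counts match: %d\n", src)
	}
}
```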
{ "category": "App Definition and Development", "file_name": "overview.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us CockroachDB is the SQL database for building global, scalable cloud services that survive disasters. Run a multi-node CockroachDB cluster locally Examples that show you how to build a simple \"Hello World\" application with CockroachDB Answers to frequently asked questions CockroachDB Get Started Develop Deploy Migrate Troubleshoot Reference FAQs Releases Was this helpful? Product Resources Learn Support Channels Company Get developer news Thanks!" } ]
{ "category": "App Definition and Development", "file_name": "transactions.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us A distributed SQL datatabase designed for speed, scale,and survival Capabilities By Industries Customer Stories See how our customers use CockroachDB to handle their critical workloads. Learn Support About us CockroachDB's availability model is described as \"Multi-Active Availability.\" In essence, multi-active availability provides benefits similar to traditional notions of high availability, but also lets you read and write from every node in your cluster without generating any conflicts. High availability lets an application continue running even if a system hosting one of its services fails. This is achieved by scaling the application's services horizontally, i.e., replicating the service across many machines or systems. If any one of them fails, the others can simply step in and perform the same service. Before diving into the details of CockroachDB's multi-active availability, we'll review the two most common high availability designs: Active-Passive and Active-Active systems. In active-passive systems, all traffic is routed to a single, \"active\" replica. Changes to the replica's state are then copied to a backup \"passive\" replica, in an attempt to always mirror the active replica as closely as possible. However, this design has downsides: In active-active systems, multiple replicas run identical services, and traffic is routed to all of them. If any replica fails, the others simply handle the traffic that would've been routed to it. For databases, though, active-active replication is incredibly difficult to instrument for most workloads. For example, if you let multiple replicas handle writes for the same keys, how do you keep them consistent? For this example, we have 2 replicas (A, B) in an active-active high availability cluster. Multi-active availability is CockroachDB's version of high availability (keeping your application online in the face of partial failures), which we've designed to avoid the downsides of both active-passive and traditional active-active systems. Like active-active designs, all replicas can handle traffic, including both reads and writes. However, CockroachDB improves upon that design by also ensuring that data remains consistent across them, which we achieve by using \"consensus replication.\" In this design, replication requests are sent to at least 3 replicas, and are only considered committed when a majority of replicas acknowledge that they've received it. This means that you can still have failures without compromising availability. To prevent conflicts and guarantee your data's consistency, clusters that lose a majority of replicas stop responding because they've lost the ability to reach a consensus on the state of your data. When a majority of replicas are restarted, your database resumes operation. For this example, we have 3 CockroachDB nodes (A, B, C) in a multi-active availability cluster. To get a greater understanding of how CockroachDB is a survivable system that enforces strong consistency, check out our architecture documentation. To see Multi-Active Availability in action, see this availability demo. Was this helpful? Was this helpful? Product Resources Learn Support Channels Company Get developer news Thanks!" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Couchbase", "subcategory": "Database" }
[ { "data": "Couchbase is the modern database for enterprise applications. Couchbase is a distributed document database with a powerful search engine and in-built operational and analytical capabilities. It brings the power of NoSQL to the edge and provides fast, efficient bidirectional synchronization of data between the edge and the cloud. Find the documentation, samples, and references to help you use Couchbase and build applications. ``` // List the schedule of flights from Boston // to San Francisco on JETBLUE SELECT DISTINCT airline.name, route.schedule FROM `travel-sample`.inventory.route JOIN `travel-sample`.inventory.airline ON KEYS route.airlineid WHERE route.sourceairport = \"BOS\" AND route.destinationairport = \"SFO\" AND airline.callsign = \"JETBLUE\";``` Explore Couchbase Capella, our fully-managed database as a service offering. Take the complexity out of deploying, managing, scaling, and securing Couchbase in the public cloud. Store, query, and analyze any amount of dataand let us handle more of the administrationall in a few clicks. Couchbase Capella Explore Couchbase Server, a modern, distributed document database with all the desired capabilities of a relational database and more. It exposes a scale-out, key-value store with managed cache for sub-millisecond data operations, purpose-built indexers for efficient queries, and a powerful query engine for executing SQL-like queries. Couchbase Server Couchbase Mobile brings the power of NoSQL to the edge. The combination of Sync Gateway and Couchbase Lite coupled with the power of Couchbase Server provides fast, efficient bidirectional synchronization of data between the edge and the cloud. Enabling you to deploy your offline-first mobile and embedded applications with greater agility on premises or in any cloud. Couchbase Lite | Sync Gateway Couchbase SDKs allow applications to access a Couchbase cluster and the big data Connectors enable data exchange with other platforms. Developer Docs Use the command-line interface (CLI) tools and REST API to manage and monitor your Couchbase" }, { "data": "Couchbase CLI | REST API A modern shell to interact with Couchbase Server and Cloud, now available. Couchbase Shell public Beta is now available. Explore Couchbase Shell Explore a variety of resources - sample apps, videos, blogs, and more, to build applications using Couchbase. Developer Portal Developer Tutorials Explore extensive hands-on learning experiences through free, online courses or under the guidance of an in-person instructor. Academy With open source roots, Couchbase has a rich history of collaboration and community. Connect with our developer community and get involved. 
Community | Cloud | Server | SDK and Connectors | Mobile | |:|:|:--|:-| | Couchbase Capella | Couchbase Server Couchbase Autonomous Operator Couchbase Service Broker Couchbase Monitoring and Observability Stack | Couchbase Java SDK Couchbase Scala SDK Couchbase .NET SDK Couchbase C SDK Couchbase Node.js SDK Couchbase PHP SDK Couchbase Python SDK Couchbase Ruby SDK Couchbase Go SDK Couchbase Kotlin SDK Couchbase Elasticsearch Connector Couchbase Kafka Connector Couchbase Spark Connector | Couchbase Lite JavaScript Couchbase Lite C# Couchbase Lite Java Couchbase Lite Java Android Couchbase Lite Swift Couchbase Lite Objective-C Couchbase Sync Gateway | Couchbase Capella Couchbase Server Couchbase Autonomous Operator Couchbase Service Broker Couchbase Monitoring and Observability Stack Couchbase Java SDK Couchbase Scala SDK Couchbase .NET SDK Couchbase C SDK Couchbase Node.js SDK Couchbase PHP SDK Couchbase Python SDK Couchbase Ruby SDK Couchbase Go SDK Couchbase Kotlin SDK Couchbase Elasticsearch Connector Couchbase Kafka Connector Couchbase Spark Connector Couchbase Lite JavaScript Couchbase Lite C# Couchbase Lite Java Couchbase Lite Java Android Couchbase Lite Swift Couchbase Lite Objective-C Couchbase Sync Gateway Provide feedback, and get help with any problem you may encounter. Provide Feedback Couchbase Support provides online support for customers of Enterprise Edition who have a support contract. Contact Couchbase You can submit simple changes, such as typo fixes and minor clarifications directly on GitHub. Contributions are greatly encouraged. Contribute to the Documentation" } ]
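To show how the flight-schedule query in the record above might be run from application code, here is a hedged sketch using the Couchbase Go SDK (gocb v2). The connection string and credentials are placeholders, and the query text is taken from the example in this record; consult the SDK documentation referenced above for the authoritative getting-started steps.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/couchbase/gocb/v2"
)

func main() {
	// Placeholder endpoint and credentials.
	cluster, err := gocb.Connect("couchbase://localhost", gocb.ClusterOptions{
		Authenticator: gocb.PasswordAuthenticator{Username: "Administrator", Password: "password"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Open the sample bucket so the SDK can bootstrap before querying.
	bucket := cluster.Bucket("travel-sample")
	if err := bucket.WaitUntilReady(5*time.Second, nil); err != nil {
		log.Fatal(err)
	}

	// The flight-schedule query from the documentation above.
	stmt := "SELECT DISTINCT airline.name, route.schedule " +
		"FROM `travel-sample`.inventory.route " +
		"JOIN `travel-sample`.inventory.airline ON KEYS route.airlineid " +
		"WHERE route.sourceairport = \"BOS\" " +
		"AND route.destinationairport = \"SFO\" " +
		"AND airline.callsign = \"JETBLUE\""

	rows, err := cluster.Query(stmt, nil)
	if err != nil {
		log.Fatal(err)
	}
	for rows.Next() {
		var row map[string]interface{}
		if err := rows.Row(&row); err != nil {
			log.Fatal(err)
		}
		fmt.Println(row)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```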
{ "category": "App Definition and Development", "file_name": "sdk.html.md", "project_name": "Couchbase", "subcategory": "Database" }
[ { "data": "Couchbase provides several SDKs to allow applications to access a Couchbase cluster (Capella or self-managed), as well as Couchbase Litean embedded, NoSQL JSON Document Style database for your mobile apps. To exchange data with other platforms, we offer various Big Data Connectors. ``` val json = JsonObject(\"foo\" -> \"bar\", \"baz\" -> \"qux\") collection.reactive.upsert(\"document-key\", json) .doOnError(err => println(s\"Error during upsert: ${err}\")) .doOnNext(_ => println(\"Success\")) .subscribe()``` The Couchbase SDKs allow applications to access a Couchbase cluster. They offer traditional synchronous APIs as well as scalable asynchronous APIs to maximize performance. | SDK | Documentation | Hello World Example | API Reference | |:|:-|:|:-| | C SDK | Docs | C Getting Started | C API Reference | | .NET SDK | Docs | .NET Getting Started | .NET API Reference | | Go SDK | Docs | Go Getting Started | Go API Reference | | Java SDK | Docs | Java Getting Started | Java API Reference | | Kotlin SDK | Docs | Kotlin Getting Started | Kotlin API Reference | | Node.js SDK | Docs | Node.js Getting Started | Node.js API Reference | | PHP SDK | Docs | PHP Getting Started | PHP API Reference | | Python SDK | Docs | Python Getting Started | Python API Reference | | Ruby SDK | Docs | Ruby Getting Started | Ruby API Reference | | Scala SDK | Docs | Scala Getting Started | Scala API Reference | C SDK Docs C Getting Started C API Reference .NET SDK Docs .NET Getting Started .NET API Reference Go SDK Docs Go Getting Started Go API Reference Java SDK Docs Java Getting Started Java API Reference Kotlin SDK Docs Kotlin Getting Started Kotlin API Reference Node.js SDK Docs Node.js Getting Started Node.js API Reference PHP SDK Docs PHP Getting Started PHP API Reference Python SDK Docs Python Getting Started Python API Reference Ruby SDK Docs Ruby Getting Started Ruby API Reference Scala SDK Docs Scala Getting Started Scala API Reference The SDK Extension Libraries are shipped as separate" }, { "data": "Distributed ACID Transactions are operations that ensure that when multiple documents need to be modified such that only the successful modification of all justifies the modification of any, either all the modifications do occur successfully; or none of them occurs. Distributed ACID Transactions Fields within a JSON document can be securely encrypted by the SDK to support FIPS 140-2 compliance. This is a client-side implementation, with encryption and decryption handled by the Couchbase client SDK. Field Level Encryption Health indicators can tell you a lot about the performance of an application. Monitoring them is vital both during its development and production lifecycle. For a database, performance is best encapsulated via per-request performance. Response Time Observability | Transaction Library | Documentation | API Reference | |:-|:-|:-| | Java | Docs | API Reference | | C# Transactions | Docs | API Reference | | C++ Transactions | Docs | API Reference | Java Docs API Reference C# Transactions Docs API Reference C++ Transactions Docs API Reference SDK doctor is a tool to diagnose application-server-side connectivity issues with your Couchbase Cluster. 
SDK doctor | Mobile Platform | Documentation | API Reference | |:-|:-|:-| | Couchbase Lite Java Android | Docs | API Reference | | Couchbase Lite C# | Docs | API Reference | | Couchbase Lite Java | Docs | API Reference | | Couchbase Lite Objective-C | Docs | API Reference | | Couchbase Lite Swift | Docs | API Reference | | Couchbase Lite JavaScript | Docs | nan | Couchbase Lite Java Android Docs API Reference Couchbase Lite C# Docs API Reference Couchbase Lite Java Docs API Reference Couchbase Lite Objective-C Docs API Reference Couchbase Lite Swift Docs API Reference Couchbase Lite JavaScript Docs Get Started Configuration Migrating from Elasticsearch Plug-in Get Started Source Configuration Sink Configuration Sample Application with Kafka Steams Get Started Development Workflow Java API ODBC and JDBC drivers enable any application based on the ODBC/JDBC standards, for example Microsoft Excel, QlikView, SAP Lumira, or Tableau, to connect to a Couchbase Server or cluster. ODBC and JDBC Drivers In addition to the Couchbase Support Team, help can be found from the community in our forums, and on our official Couchbase Discord server. Information on some 3rd-party SDK integrations, such as Spring Data, can be found in the SDK docs. The developer bootstrap exercises and other tutorials highlight the use of Couchbase SDKs in the stacks you are most likely to use in development, such as Spring Data, Node Ottoman, and Python Flask." } ]
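The Scala snippet at the top of this record performs a reactive upsert; for comparison, the following is a hedged sketch of the same key-value upsert using the Couchbase Go SDK (gocb v2). The endpoint, credentials, and bucket name are assumptions; see the Go SDK "Getting Started" guide linked in the table above for the authoritative version.

```go
package main

import (
	"log"
	"time"

	"github.com/couchbase/gocb/v2"
)

func main() {
	// Placeholder endpoint and credentials.
	cluster, err := gocb.Connect("couchbase://localhost", gocb.ClusterOptions{
		Authenticator: gocb.PasswordAuthenticator{Username: "Administrator", Password: "password"},
	})
	if err != nil {
		log.Fatal(err)
	}

	bucket := cluster.Bucket("travel-sample")
	if err := bucket.WaitUntilReady(5*time.Second, nil); err != nil {
		log.Fatal(err)
	}

	// Equivalent of the Scala example's JsonObject("foo" -> "bar", "baz" -> "qux").
	doc := map[string]string{"foo": "bar", "baz": "qux"}

	collection := bucket.DefaultCollection()
	if _, err := collection.Upsert("document-key", doc, nil); err != nil {
		log.Printf("Error during upsert: %v", err)
		return
	}
	log.Println("Success")
}
```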
{ "category": "App Definition and Development", "file_name": "mobile.html.md", "project_name": "Couchbase", "subcategory": "Database" }
[ { "data": "Couchbase Mobile brings the power of NoSQL to the edge. The combination of Sync Gateway and Couchbase Lite coupled with the power of Couchbase Server provides fast, efficient bidirectional synchronization of data between the edge and the cloud. Enabling you to deploy your offline-first mobile and embedded applications with greater agility on premises or in any cloud. Couchbase Lite is an embedded, NoSQL JSON Document Style database for your mobile apps. It natively supports all major operating systems and platforms. Its NoSQL client database provides CRUD, full-text search and query capabilities that runs locally on the device. Go to Couchbase Lite Docs Sync Gateway is an internet-facing synchronization mechanism designed to provide data synchronization for large-scale interactive web, mobile, and IoT applications. Go to Sync Gateway Docs Sample tutorials to build Mobile applications on the edge. Couchbase Mobile Tutorials" } ]